[Binary archive dump removed: a tar archive containing var/home/core/zuul-output/, var/home/core/zuul-output/logs/, and var/home/core/zuul-output/logs/kubelet.log.gz (a gzip-compressed kubelet log, owner core:core). The compressed payload is not recoverable as text.]
;L!Aw.[Tn<$eH 8VKƛ{g ji}*#b?\ 7{duǣޜyڪLbD&θcr3IM6q,g!$C e0PuJ=Qsi)c$ xᬱơULFMzN) a1rXl;yF?2%]_Zv9w{]"tOw3iկΆfZcq)q Th-(ƭD'Tr Ie(@;5@I5uP('PŊ i.FUyB&Ow[Qz|>p2Y\/~uΫp7sFNz\ o!5-*Җ\e]33G\!|̣wAw;]UZN;Yj]_Q:!s8]#a,u=AdqDigPχ/U*߫u^#;釓?|??;珿pSNN?wϾu$h6 XC AߟeSMc{ӦOhdW+n5>\>z&)Vg[n8˻~x{2~Au>5OqOGl\չPwWT>U;!}1yWY|MWJ\XȨ~&gM𳾏2V+)u(ɳS4 é}?CpM1hO@b ^P?oHm+t`8")NR&/<3ɡbP_8Yr\m22E%H-9>yT$y˼4$.UQk PxL 5OP1_nFFq|}&[54 FDH^>[\At$1{ߋ"^z Z{ξK0`:Xt?ƕ:Bʳqj";F*D*~B" -\6r5 LZ'\l+vW̯k!'uB0GԼd&ΉE $0(CLRyD7oX3:în;@h>_6ʽ2F{v #k7:JqytԽgよXHƒ*1Y S8-$ZmA"mH ."c4a, `I!FlR\@FLp& T H~$/v RۥG;^3V=&FX#-P3WBA71Fn -ͥ Lr!{׃~5rPx9e  )y } 6Fxs)k;Fza^2jU hف 3A%`!ZL.jh{ Qs<(%^&GB \0(Ŭ` -cm"0Bl'p+f;հ{3!aή6JMoFhCa3sT Tb{5B)o |6zp91>1&2SvMqmfu&W@hcTbcpru1:Q A5 Ҏg$Ӂ煍󣐛u7 ܹ?@xx '!& rJ8uDS.h@KC($dvSfiַK ,@.>d8-i+D<͎gn4#r28Mv5E%φanqXnvW/Zu_fxx[^P7}"ODnJWNxS FS֮v6*W29bsp;VQKic*INVQèNN4z*S9)cB4FQ$F*n)&RMKY-G)" BN3rNyfi[<(l7n*%vFbb\$%EOQˉ>O(xDPTلYRB&EClv t4@ɕA =b 1T1MNyMZbA?)"㎗Jm֬欕-}m@Eä fsdm6F0Vx %H""?1lQy;#@ !E{5ʚW,2ǘkB9Q6}9a4nSDl?^*y+[M VIt: &Atuv6K6E1.Bł㠏.ww5:^֨V);ί>toVGye}rw}h=vX7⯵:cD1OuVɪ WVQB 6,Q}o4ML.Ml󶿽8~ݧ.U[mm#) \HhLHW hL%-niqԧ:Տ3%l4t vꄻ)b2M_ (p.˼箔CdZ뜮LG"DItrQB < ds64ߵW~gɫ\ߌ|>q9+lٔlnWo]ٍ{!֐ d+ 3zrb_ƣђMNns~js|ōLfgzdnjN?musr~_ᤂpƗ-YJ@M n2#8#<ə5+Vccs*"laV3m 7bBlU U!*Vت[bBlU U!*VIb+Lb}sF-e}KYRַ-e}KYRַ-e}KYRr-e}KYRַ-e}KY߷6qesO~ty96yGAo䠬}p7YOY`82rq}ٗݠ!Aȥ:2$e!TJW4",/YBUӹVp-?f!yXq7/7v3SÛSF V xc)cXo0kv-֞,-i4Қ'?n~q]i'qD V_шPm-?EoYH$CF$5²%#м)N8J|"IP6*H\@DB$0CMP3@p&sw"h8* Y@g|v&q> ~ȷ NJ܁L\G/_:Io,Io(TrXS'@4z#4F>K$͛a8|K`4]:o1A u$p/S~\M)vZr / kive;\W0e k2l52] A^ Jf:ہ){/=#Qޕq$;R՗d%yAP}I)RÎx:D#:f5=U_ / ~e $ `N[!+aye8Aa;Q0-3YKpnK^G&H8n.Hkgo8<~g|q3Kٶ\k'݋`y!?N )-K l4L0ȥ9da&;X-K:ЩAOhv|vn6ىmF z#1GAiP. uvByOFyFŔRjOv"i)ŃX΁D~a+jvJwеBp6=ׁU=g/UuKUJ5vEgYeibRK |R}INJv=8QIvF+L.+9f}8Ua3*zE0ɤ:QLy4RHt̊tiSA䬲IʪQuy)EFg%eϧdߓ2){7^\< pLeƽKbR”2-w)E\{L2G٣ ZƵbV;{O?v{؟6L1H`FQ/"3<2^uFY]ZE F*۪O.3NxBRJ[WL( <ɍRIKv*zc -Z#gqNKhW?qq ?ֿ(weLz>ݕi&ԭbĆDI-W/G&()z!֖f{z*hd K`Jy";Ђ)-ǺSR2zF+ϭ=0Ɍv5jkr{Vhkc\P #hs4dq-s Fٌ뵔8cK,e,>)(*6mVEN.&a#}T8>_dd) Y+.08J*cZ HP쳬M1Vf(5 4 ؔҝ4D#\4=f]H9̂mEfab\ jۢhui LSPK@-,`6,B0 h`Ai *f oI3@VtW6hRIXJ2mZ#g3F2KX]Ac[D"v/g9d)&L<39I`4#L 3ݢ3 mH6\FDጏ>XH:ind!RH4BIvQնٌ7 p풯5.U˸.vӐ=x*(dj_ڋK7Qțh䗉FRU1~GN~?k!O+^%k*0T%VIC6JwQ}_֝+~{ZyS2 ϊI4 `5蜊ACB12` )5SOq^vb]ǠgwY[au}Cgq6e]|x%b 4P3 ))6bq*"< "^^q 00dZLZs4B{zA'ØJXBA(gEud_0$EC>Ew) itF';n7SDwkjyUG@;9^y gI(CbpI`DIIt<4v=|q0C֖IiHlt+ό&㨜AbFYbC\OwډwӖ=i'͘}NSXac=#O03%R̰R=:K!1Q(*n3b^)/ErKYg8v1T4_C}'.+Ʉi@zxh*J7-ѱvjȹ1J͋BI?TOFqHM%7)h+M1SyIP Z8wvLN g\LKƕDXV\8-YC^rS7$4^ U&% @]VL[e 4at$2B&![*8k87m$IKU$l"qǚqO0͋f/-a/8 GYXzm I |ŃA2 eca0t\K}I2&:/T"98pRBd)Z8rYfAM,@'H 6N5'PnPl(:9ǐ9}ͿV(0iyH>59lt\L,8*0G~|;a71jx(_gA{$d}MJA8 b Q-%'t)Ajqojrpc`,N{et73"7^мqJi!sa5?0Cϵcc "u^FK3-AW+rG@)h8-uǗjTy6C3-Z͋]uKoMO^UnCiՊ3?̃|n+Np4iV@|xo54| 5i #5 #q6'2e ˉ\?O=Q[?!FmzVBgq9퐖2RV>Cz6IQ87Kk۝jSM<^蟟;߾|?wo7f`- M$h~}=[C [--chS/&|󸊙6acYmjɭHfoP"$.{Op\5"0#D6ټo[M5 b !V|(14T&%<7+;qesG׵G&]㬼]ǥ"z[`'Ik/HwM #9R71NNNO_AЂI3uHfOX H4^ DRnSt{F\բ.yG;! E(x ;K.ɘ#@ . 
sid cm@SaVaE|ڌ [UmvZ˘~ZjaΏR\9=$5eI)rVwY鉯S+Cի-1P[ӅоKHEXs۹N{>ɫiUNVZVhw\GOyG;%,_no^>39?ЛC {`Gmo:Wy#zčl"|VtFGo߽GFerJJ*T 0H${Y8}FAT" Z8Tlm%~+=bi$cB+C>\wd!vpR|K>UE]{H%E93挆8cԇH^HS]+uY#S -"TN:S%8s:BKهӜeEMFi:KFrl}2}CJ/Ni zhpCa ܙ B9" .1mhldK~f;Ϗ^ƅR\b1ˇO_Ofu:}LצW?y7ïJo@O5nGOc} ;6k 76z12+J/'3E +|93Zza0:_HAg6*U8Pz uIYDtXJF82)O4[Jzl;A1(Ky#&xGR=XQ^F#(-sFȓ"ŭgBieLSƹIR0c*OQF)L+Mx*38hG;zg* /M՗SS C ϧࣅGw<g-= (ק@w@t'Y}>P&-Ȅq .DxІ eF1+X#@'(m"0-&ΆFn)-ӻϛ^x>ki\(>] B^_ފ+7//U,4}Zt\ٶ/oi}P^S ,뉳WԙYJ2D&8zPhm+"D ȓځxՉ)t,/k7wWFYw7--t=^\9_aAW>&[mJ&PcpruP!bщMntJ;[6m |&<,_z<});DqQN 4@4\@I^ɜM'OP6A ڰ,O2%߅h9{!͸*'Ӛ/ 8ho+oO%r1=)2.?dr9kkÈwАksur\{2n|5wp4Smէ_QHQ($Sx.*3dAY/I$u:δ`p-KSS9@H<уC28m }DF@Nu'D"~:G9~h O-) rʱM Z0G~3Ⓐ\hΡ#fb5*!hIpM`voEÜ j+PTڳb<[Y/0yO=^FC߹1hḵ\ELA;lQFbѢM>*d/wv)ߊ^Y48%ik.R-8\ 3L_E ݠ7@2v,t\$h`.&]{wl66{ҙL`"Mf,Zkha13F5I4J}{vXbk) HjJr~K#6Ji5  Io)F-(&fE@h[X'N\sS>,W0<1ǚ P|$P&>UHJ(GhOf"F/kO(nN%Vn욗QH^Ʋm);Wkyic! )f fD=դXǾ\ʑ*T@B'\ \ZQpNvNm~A_UqT1+6s!q$6)$Oqǂ#YMI8 ! 0儡6M/. 8n=Q./A׋Ry>4VWsck#R_ۻOVG3.9o ɣ1v&M:͙o/`$gyL֝Iň?[u\nܝ~a=u\k7Hew2<&Qg0z|pqNГ'!麩 7,*'!}̣W?Cx +#{=dSMϊ֙= (;KTokwN6vs JB}r`r{7]T?9_~_?~O.(p G`=. gM"hK {?EײT߬kbMѧ~%%ߧG/Y|ɭ';}{?{#'8=2z:a+(6NI"ި3+[Ke8ס6!> *vPuM{ubmsɣܕc4weV SPN}l g3 nG`O W!v+#D ^ƍ[;A'ۡ3*=R 9PۜIt9[>WT)̈' $oex KUڛdРxAc\D :ԣm{Uꤘatb'fX>%aT|cZOVKV`ku&sf/e/PIpn|%8 _qUT!J"+V.orS&~^Kxbt19rOV> :yPFL[1T2sb %)W UӒyDٚޑ:k9WH3&(t]`Qu bll RQe5cZTtpa1qCe+tJhǍGZXO.sܑP` }}S\[U`m-}>-# ( GNY0(ʅ3- Qˬ+V KQzLG/GU4`\ 4L`3AP0-&t4QmDJ:D =\heƺIERF6w6ɼUIqeaRL|gZ8Bb3vÝ7p"~ɊdR!)9bRė&E$63S]U]]+I=K(ǖ̎&2C35Z"6++drƁH]TY "` X"%cu"p;&B<\:%9{e<MӺÇ1+yޯeIYWXE9ϝKh Ll'۲A~EޒxW%+8#~B#*8>\M]|*~rD섶XmZ}0!PB iE al0|Rj@;i!- I%>Z7h˵8P>Cr%Y-FKPnG7W)|MUa:8)@,J) %w+93"9#p)o rJqIq($."dEPW:)Eqčv"?@ъ #~v q2DH43Y{9t6 /1OhY d2 IruL1&& 'Q) (L8F ՁKjiP{HBVLTq * 7|zcz kk'z3^l:0;\%L+_^M}77? O5xUQςπZ1Ƭ^@e\&,|)\Whnj8&jKb2aq)&KTh176qUѰCVUXoRa{X1B^ފNJOY߆S Wofq;Br?JiycV@͚wo)ϙ]^;}fqwivۄ]. 
9蓠ThV'6XMȓ<)@697H"[o_ >r5#z5/6jUep,%+vH}[o`JoZt5а[XW@ZD.D|MVPT^]B?4'wiOҞ!6r:9`%M\(+:1I$D 2 &01H0FB4Du Z.|{pg*)d$SaC9;T){&T e=ʬ-޼Yظ_w1>6mf6YFLW_^FA7-Kp;`j,MH3!t`4|yWM=[{6]yd3_ongVKe,-ljZfefkZfZGuD3ʞ "~&WsAZ{H%]3 "><ڊ*vjUZVŮUkUZ8Ԫ5VŮUkUZVŮUkUZVŮUkUYY*MP]jV궪"T7O oi&2Ӆc<yC6\DwO1 ꚽ\x}/'D'$>D"$"٩=YfZ.gG-bNj}w\ j?ևy[GVJzY(j3IIаcrc'Л>48n nZ`@|\JauN}dP48(Ji"Y_?nv`Sf'')JFNuMsK1;*~kF2HL+T3sI>Y[HdQ!^U(*Қ95f+Eta1SrZX  L)u6&'X=]nnFnqud-smL(p"%2%J24*H%ˆlC ;PJ{٠{ĄEJ%ڔ.EvFð\.jmVVkU]-5!YI9a!iHQ& IDƉiۋ$=d @/QׄHbpD BFTG Û]N <+Sшc_ȫFRS,c)VIt::"ZF6: :2bJ}(6)Ms$ UTThO]sTxg>9;qɬ@)u},EՋU/nePMTRiQ?'L2}edbCSчŸc_}XsVYYMQq5TBEt92$ .N&h!C`Q*eϫˇ/ 2O8Ig$ʡJDT54BSgG77F?PcAy#yW"*AkeJxA7H5hLwtJIv?'g&7Wnsjn3Uhny_\oԇ'wP5m4VZ; \Dsa\9EYD9@ޯTHV5A4B垍bCF9.!wRL98)N^EM4W(4{BIpE~\ $@Q`65V7͆-}.9ܗ`#x|&ox`Y[mg-ΰMMxsLܤc%d>ox_vżS\0G2%j_k_#7[J:ԇFuo)Ҋ`%D)ASpm\vjD J@4 j''uo)k+p@ (ӆj)B T&CԳ|jQ12t]UzE^@.IݞJ^1Q9^3-K=..w U<X8x) 4!ILPBED{")l94I s]oHW vh}2Ivl&;$oL`YIv $uZW$ږbcdwWujsh&/u㜟  "Q* J3ːN@K[s/ҮFAϸ4- ʯ>[L;I 5>?58Yr1X24):Iݜf߼>YLoaq8bH _8W6io1V`*4I{\ J1uf0}s<<mWF+aUiC(dZX.L^XL.ƈ70n-wVĝ% a"cq]jjn̒88떧.usQW4*'>:̵zutW/wg/ߝaΞݫg#0,f/#AIs'ځ[]57*A׺ַ|}:_yIGNYa̶x-ҏW_u%kq&&Xĭ̏A/ǧEjTQ%' .}Bbc=o@/_'2@~+.q]i͢;";㵄nFivF2/Pe, *0wHsIM,ݨ7nW1>nNdPq,~*1GT1VnNIO k5];##mvGF1a{% G ],NG iOLkF0_jZ*6*'OA=rVjdƧK^:y-ك$aZ&ŷy&13%F炉'a׎u`y <@rFϵ,wJ 0Û^fWw!nE5ufXE!#`k U ϬW逰~sVk/ԃ[,w/C-3SR}g01ᷫ/ Y:)7#:EǐKr%ε&D0bwȍ1vt,[G-ɲrBh-:zN‚mfiǗJmwqb*pcDs0\"V8m0RSS 1>Pr""' خm>F!6+2*) f 9g)UΩekNb04O AIR#K` TtEcHLaaaBܰ( f]J~RĀ%V*!Z5#z2FYf[lMGS5Q'1(+%&񢑱=~()\C kxoЌhkx3|@RvWI;+#)6gFW9š&?JBF?uWoXvM8ɱn,czm!-wf&dug[t䌂*gRbr[7`k5d`m:u7/X0Hc%^2J !Y5xAQlܻ"8JQ- :߽ޫ흂=*:Vs^ kOnoV&'Bܧ-j+ +#|ʕY"JK'FB&jNQZBo$lm%հoY(l 1oƾy FbOS̼&&͍399Bj0eGp@ə!,.;0VMI픊~5@Ոd* rZc!nP>D%ITJ,Y@(X:4z@Y&e jaBU& F"VFudXDc0ШATXfm䬇 n&d"`3^npf8s-\΂b>Xq];^w2g*6K6].]|V( T2+ f "9A"4n( @NV݊5UB[e H 0Q*)w*=A%VJԃʥU<m4 &R/5eDDL ` Xy$j#50o{כ=n?{T֐~ ' Mޫ!y3h*#S U xP3ccX3Gu`e"&" L-S3Ϙ1ꪂG-MtZs}_Qav`oA=0"S);b\&XV}..Q80հNNQ1t~O%:ކRgtdu;aF[/#*R/íeI_E׃*x4ժRq:&?~tQk(FVD&r5;UUxkv&*nVGr1;@0A`U"CQWZ*Q9}FѨGDRW"e*JJ*Q9UQWH]1JdSv0ۡ溗Nbey ஆqqƈY㎬8{ :~i} oNeM&9ϜȘ !eTfRJc25֣Uʋ."IeuWuH^S(Z t:iau-]~Ӭk/avEN>qT C wy%.'%:@DRrcv* e}Tp%R -Bw:T9v\[JINFF#DÂ_هϳ˧`)q@} ~ Oh&Ã'RsL0L ͉۬Z]V J9FR '{>IX\%j՞BDncjjDV\f>Q{uwu䍺z+y-Iܿ7n/7/{rZ12푍Sj7 *ǭA~Ι#1^Fug N<'0KvOLCW^IᅇXiR5h7Pm^J/'FjiJ /'qJL!Zk^+EJHGxnQ9^ Y{﵋0l6򇨄WB߳:$>]C^Sx<F<j&R˹ߔVD&ZQy17eXvdɴVȸ* тPpZLjES#\1`g0I@)H@*(c!ε|' u8TxGğFTɜ>âҌ;dpv*ʓH&Gq+eQ~8jܻXX7+3+f FYM] (39ֺ&<~5ayH?{Fr r0r߻>lM6@7daT,)R)kzHJ!%)1 ]8鞞Kt^Eru,99G"K阢k0 R b4YA+H] njY<*VVߎ,ޕhIɌ(%MIbŮm)ݼSFzhWZM7Aܜ(aO $etO,AbT'H(z!89g'py:$M\%Trpc`Ubxm}3q3xM6ڬ(ӏ Jӥ7gS\´u>9 Ҷh_wVkQd!4?VA.ф8j}&./i8YӨ2L֣ekI,^rq&1r06A Odt9톑h4M\hL$MSu$[{Rz{Og]݈nfs'i:kfbgz}Ƌ]ګ`{]?dW]J,nt^ZFJǚ_fmӻv? ,R()`œ' E}#ֵ]KPU}ՄW3[;R|2kqr[O}?Cunvv^Y!%v׼u/ў'4/ĈdO+Ph5L#dƣ ՛r%z %EsS"l)g1HLp" �cJL`6R˜,ByqPa> hx|ie/vS¤vp9+u~XMp>/D}~SVŪnнW)0HWTA\:!Dk|̱>쒉! 4H%SEKdk:jTѣ%SuP l %y&9p>GUq`sY@4 -fq0jJ~%S%fVʥ_?c(mf\.CIXF" U +Kp N&x[gTh]`8J%F.pnj2[F*̤e#X7G'h{Ah@) ĝ"%EIcP!e,s5gӸbCQ[Em)TS@hk뽂f . 
YTFTi ^6i "-:d kbb$ "bc֑@XLpaԗ|/ 0CQ8 s"A8D!rDkHULؤl%c搉e yP0cA8c*ҍVl,*ጏ>J&neHJ\C$ՆNjO8oV'[gUr(.ʸ\pqgv] J 9l[IQDM?B }:#s9;1p/xXmu~Gj5/YG>G_dFz7x?+kJ6BeRee(~uM¸ڳfqvoo_O翞U7m A.IFG{3 5s[V}esP']fY%* ,:c0*BL:=&ԦxvzSnJ}t1k2ͺpO6|z;_}kb4+.,eS{JBg=&"xJZ(Kh1 Q2axaI&G+kml&YtkbE T9 ܪ,MQٔYQ0$hS QREȨ820T`;t~M~;i]mVx~nSxH~3ȝ% @q&X/)mI).)21iIn:3Xz:g@y}(:3Po$m&!$R"Ϭ!娄2AjcA"(j8H}5&wE>  ԣHQmn + ^x[Ѕf> +qO(ID'TN&r&-.x6"!az qc`pwFGίDVX;hPׅ+43!1 i`6yAr >bi3`3҆(RĬ#+@r.f~= ua1e+ s{H8(5w<&M,R*$Az$c)ߢ~1Up_QB;]lws`Ms{9Н0NT7_J)M8ܺ7>5+( Y\Ork,hdqV& zsb|uL˙˵c3?z=iyy 'h'Jq8,}JO;" A &:"6&O.F8triti͟O6y]Nb >&qtu+Lhti[wtv5p-cmûq?x5zCn9[2ess+(.JGItU d)oJFaMo0K+Ql KGJ{_XXW:#~ځlĕgX'TY[pI?{ȍ_.6p >b6sؖ4y,bwKjz%ڒv5]$UzET-pc6t ;NLg,/dR3l얮-ܽruDn$)S6jȀ?>eم5[ kgE[}q+qf=Znhׇ, m0u/lկn]wF?ggM֟zJy+gQNV*+ZYnm~ ۜiI@H|fNreF1fؐP>oSg9_Kc%#:ݐGq>5?LN, V=:K}JO"X`uJO#uA*?ZSiTr}\*RK/%.pI\O~t%BIp0i,52HL-zp۰uN]>Ws[J#u,(nai%#b)J| 6jRḷ{m#I>I*nb]7+]4}i^-r^O|ȩ'ʀ$z#Ii˜1I$dLr&g)~ټyL{~Ma[sSi]%Yo{ẃ1i͋7jz7&=L0h[ʑɠ ' Z 7e);yv>TsD٩(S2**JlYjhĸ48XPSp*k)\"DG#q`5.pzpe-PHq @9] WrT0QkkDNΰÈ7AIK!)R8H3o{IN+y=B;RZ%yL|$_e򨜥Az>J8c ;gy`rJ<~.@_KoU|Vs*:o?L咠aReZn]YMX+f%c;'AIñ}3s9NjI>ҭmYmPflW?H dgr?H4Ax^A\j聨ɯ~t{;f&LP}~72UiFLŮ5hԑPC%rHdG7 >Lv9!U5=W3A\L8g}Y h_;$Ak%#Q1e"vG1_! @f0_jg2gS5g罚gko;&Wq:8yC޴tu3+BTB t' ~\i50e߄fOWy72 r]g@?> gУŶ>'6,ۉg ۭ#'ϻLP뇟@dy9@ 6J&=dїn4ҠNN7CfF^)M JEbx&5W*vZI\k9Govs򥷴oC;/kﶬ#ߣ.NJ| nb.~ݴbp,.*g3qWI_xo]F.`=c]g Di}L({.S,Y$Q,Tk~RQnKN =hP|GU@ u]x5X-PVINCw몝%sDX(*[?EVcOn(*j[9Th䛴K{8ht=z͒ ub1YcBR]坐TI1menҰfUnq!0^]vقLFwumAT ds)eƶQw7S* 6d2O%+Sg|e*3`c\҅4{%q샩7<>b  ~}QREs*Mɽ?gb<#x qi,^h\Ϛ " HhJ`22Y B9; Mww_eGw~asJ:2d3e-Nk9 r"w~d%yKg)b\ uB  =%Kũ.L v*Yz.\h + qdrɸZ2TJ܋7($Tgr>q%G8SI{[WR03%*v{>},`JNDz=k[2L;TTrC7ՔTW@00ɈL.S"2-LaoQ\i [чʖ.qo v2l +M}___ ?GWoFuv+bukMz,9-u*%̢(yi-`^sU5W ;o/m;>W9eg֭ߟek"kDSPNE)$I-I>h!:OR>o|d)Џi\ӗh4OB-xsbg_g G/wg?2d~aW0ߝh,L_-'*~;sK3TN 3Ndr<Pr̎d*I~$dū<@ /j޷ 8*("Jz,Mɢ@%Viv}1W`ɇ =PïC;1Fy4 ri* qMBij: ,ZOd{Nzc0e7SQk,tXs)h-J ד(Fz:͕GSQw/Ҍ+# Un1 WW>?wݥ >r։QCliBpiq`TDA,hCMR(ŁqƁV;D!Ra ʐ ^)FNJ!1wcZ$.JYaOlV/*_ h.Ֆ`>Pa^gBɌcGlA9Ot@:MU.] 
XSHcImo@Eb OU}#+w"0/q6N&6G&*0A 38+e& cGr}%F^^{åF]V:L.kŬJ8軷S }nޱ~忻sq/o?rkr0GW^޷t4\1c3'~w.rt_jEBDiM,&bbIb.*NoX"?Y?~r*O߆IPϯ[zm~^GoLW9lh_Q#{OvL6`ɀ|v5^pmP-pV/0BV@cрRRQ!LyH5;R_T,E)ޡ0F&ČI&W=Xz1#ňňE~+1ZjOBi9aFSrIZRЁp쬷:1AhIh#D5Ncp@ :gЄC0*|HJ8L訫e9}#nx7R!5r+f[vU~tj{]~ޡy_web>3l얮-ܽfm6:_k6]^Ȁ?>eLZWϭQzs0NmwU%_ͳYu-D vv>yx{M:h[-7C&{}1w0u/rWtq%˘`]G?ggMVw mrSe7c|#޶z@9E6w,WNlTa7_Ec%#:ݐGq>5Y.K# -φb~+#I3Hc!ǨEE#B[;v^GV> 9X8lֱ%bI4r/X aFRp$%˞6!'&}\ńC&$gRCLPQ`Tc qDwmH_o{{@d`~0a Ė<_l%YnrAEVY"Q|p) :6zf^ILW_* ,S{4/JSYuG;ui*48'w|Km5izZ<[p¤QΦ ȊFeLGDǓ.ʘHsJft*fcՎxiO\Ճ/0b@}}}RhAzx͆mdL_2uxq0`4H՗]2׆@+UsFbt!Z%VcE31D>0qGߴul C6T[:u*lW!)JXu^uߖʳ:\AUHvje kg˳]GϮC;8@175DZɥęax" g Ujv-b̖[: idZx+[TϟʮFk zK޿MGRS[~qKbb_e7uZyz8&?l?5X D*&yeHx/tH0Ȑ,{_%O%ClL1ZI8f'F E88]˻$]jvm,Y+s+r`!nq뇳zxp~VfPL돋y1}wN?5 RQWлw!*F+c'ayO D)oh#iTy6c3Mo/[z镟 L.f_&fOiB+tlۥM`8fR14N2Г⦱'Vt 4w#\c7˓)mLaO4e*>\z|g?YpR7 l}lY ~H#uth88W`㔟pz6՚F5͉6L/B~xן~(_ϟξ{/>=8PD0b~}; 뿽]] +ڵԳܯbGniVXrkc/_PN9K[}SWiThMWHlRit &j~|,~XC ff~P@e"niOrWھx t}ZAwZj0gԍ0:Ήw>{:gM ?@٢J6IY ̩Ȭ*nܨ/$mt4z.9a6 B6W T(PAw蝑9>WqL#0Cxy9ykYNr@*mYwNJp Fc#2wd@q\p6[imN<a[n)3E{;z uWFG]Vpu|bՓ=S%WILqvlCI˹}$iϔRJFu]^54AN6GNP)k3ąJIn#ѐ-eZLU>Y|ٓ*%G*$]dIȒ FX'vRvW`1 LYv61mZA-R,W8bSrJʸb&8t(~j/ bi\qq쌹18~vԿ3ƅ xy9P-Pht֊J"Gׁy:JJdE̡)9%q#}jL3zCy 5xsF_^;JԇP/NwVY{j8Vt\8lD|̱DvI:tGj[=lWpٮTL \'gϑgYm]uv"ZFpT6.%2ySdÍҗGn ^3N/|چp< ^+Li"#XRSj2DfY9&FF+-R fu$,xjML̎S9K4d6q-s%1H*aak3cG,e, gfP'`zNۜ`|7SՃ?x#v)R!V2qL)%7pǴ #Af2Ʋt1RR4 g=l mq'-vdu#~sHcskK͈퇃db j6;vEmht1ص">YPW{i fE3(drs*f od!3䒬 !uU6h"ccV`g­k~^ bkcWD"vxHDtܗ8#Mb`4ɗla!3rɋ$VayArƴd'%wY.$ጏ>Xt܈dC Kh C9Zg3"~H=⸚)1u6KvE2.;\\ ܒZ$tuAQDE?,Cg᠀T_[9pqx,x[>;:3hV5OUFnU:߲dF1zy?\ޏk.j跋oW-8=B`╷TS!X%-SQszFq953L7XaAN"g(YzOxg`x1%d3&Mn촾5;_En?  mYmˮ7 H7-6@(즧MЅj< +qO( DN$r4$-"EJ!([5=+22"Xs/qOA8̾iA.:V7 x& S[h]#QܪwN '|tkk,R55 _񁚳6 蜸mXd&vW%ޭyЌ6oli+m8zGǮ8]>_Ug~i9q&pxITGe|)~ 3y B$>hA'^ 5!>r)&#CI-q>6kОo% r}oZ1zEC>&+I˭ƮkuG ֶ ;o8Q*ƏMWJ, ~*PZH-6ژ l "0%>0k| UyOO7>[r[mN($]Z6TsNaRr/ą%}:4r&& : /TDN7 $f; dѻ6O[#Wk1[f٦D:@SW0ƀ01?Mc1h\L:Q{ =dzʇw&ӧ8||J ۈ&]r3N+ ~ߏΠ~R{)Ҽ%LduܑO q=gI9J?Ԑ~(kAO+ItڜW;]L?0tlKRmf1Dz?mS}zLo9.B7{lfGutx82nތ йM<݀qp7E%`{Kvy$thR?h}dB̔-RXr0/3J\eR'? JNPPz䤉pCp Qu(,uZHO0[T&Q8qz0 c݀zrTx)_ozm+!לgdIuR)h,D;f9Ĭ>e+K pq(e1w>=*ZÙE[ՃзBKCRX57<]{~u •QqCʅU :>dN.]n[;w/vȲ:A5lu>Nq3!D+C kK\ tpP ZGd C J+xe 1s&"g"e^Qq8±gEo NklN 7Zyb# ^Ofae,zH0e1ó—닅=Top7ޮHM6KZtD q`=JI<:%UJ\Ɩ{2i5$f Gz,J04j@B-Z/J:FxEhy&ΖI!Hf~3KeV2DV0 [Yfi\@Ü6,VfL!I_PU&>4*CbH f$L[aJ3'SdVXD"fA -)wWbT%wSm{sL`x*ʠ<#my fRqǣf22RQ9֡96Nf 2ƿvJ-18'++tB[dgP_u*v8i f*= P].s NA k,@X9 T @՗"|wLK=ݥF.5^%(<1GOW&Q![U@o*O.ϝGi!:n.cm}\/ppy #v}ί1 WTx.)Პ=KMDrr9\+8 02TlRCWT8UyWuuK `)iŬ J 00K+KYG|UUK{mFOX쌳XKgV,1Ղ&t<Y*BL&%LdRz]Jѓ,>qT L2Gc)6^7ZgKG]S;[z w5B\yuId6j+Z>-:N-֭7{(,nݬztu]߳'؍kEE Յ#1ߐ5לl,gWӳ[ع]ɕ_Qȍv>;ߟFٝ\hfG7woIGü>qz72m[UsScXiۖ/__x qǥ̭́ )Z5\%rvN& B@mdU |Ȯ'D=I?Iw's$uNtc\F$g$ F< '%hW& zTJAo(Enm mi#Wnә{!UD#ӣj$ KQ2IoM-Lr1^̀,j,\q㭫kiv|0S9Mo'zϓrWp'g4}ӛXBpZ]]0U\4@Sڈ1X|2`DbAr=< s'J?w@s'Yڧh49$^Uᛞ{xQΜ4zHlBU)@xBI@Pd[,q\8-DB+H\5q,4*Zlu|6z8?%~Jk8-8yCnp^~$; )&TB~f妷6.4*%H]sWXK*# \Wwofk|sK{i̐)qH%jM,3Ch4~#x+RgSܒ&m4k9\pbW(ezGN@Wgf` W>P?}'?6!]:Һ6|Y댘7e[Zoi4# ^tlG)[lm<7 J+,oW"KM֏:M*${ԿL5֐ֳ&0-7$& em6{%#$5wDMdFMso7;6eGBל5ӿ>l$(8"ݳ], Eq/CZD$J,)[2F- B\U)^uAQO/US;L]U:;T^w43Mb-3DPug&޼CjܨNf4/.hLҤi?"unJ)+a"@hUy)~G v,*UںnR z/~\b ZqDIc^w>n|8HRL)0ǤqAZ6#*8ք|\pv>pڿ!@.D\h*30 RU)7>&ֻ*0 $eH T8VK+*A'ڒ={٥}c{EQ v{#բoW:FMrݗw,8BZKjI8 !jÔ@mErKävƁYc\FbοۼC톦$g Pw.?)bujz{@uy*B'-ӲТ;krgxUqn=M %8[MHF(F®W W\v͚CH:^ӽ'PSULp I*H`4:'\QqW@IYǮǚq/b/6E03x^81tA0BYE(q "!QG۟jkd! cEpi  9!p$F8. 
i?bsܓgDcjI<->үmL OqO?ٷ&\%v;\E/߿#p&?Ğ !hqUh@{d} sm+qB *0Ep& 9Њs㵻=P^+&EUɃsrYPNjKiVnqܺ.3ިElC}tmB~MګtRxBpN6x7z \5NR :}88Qu: 4y7=N?u8'oNχ*Q<񏦱nݛKd?8:7Ϳ߿9w;sۿ= 8EI L;`;yN9듟|tkF̀+^Wl\M'M mTB/,kGZjU?%yvF0:N/P=6_I8XM< @k`l5~/HS:FhTe47 LrnsJr-"cVz3wT22UԿuduA.A-2ҐW TE=$@!0mB4%[fm崵1ԁN[tZVkeФtR 4WR^RaBe*!\4radԜ/s/ۦ V~btSm߬|135 :y@9&erd)AHpHOELUdTFbqqR1r6w CJ+JHi'I2aau(nOG1S1/V0[7#:ztD uoF}qAD,z$ ZZAU^ K\cI)O#$駂D!~2HhpI 4a, `I!FlR\@FLp& TqKI IR,! HJQ0/ J#g=$/ mixi}/>Iu1 -繠bϠ$yfG3:vzrYO}«WM]~A'.^!^d}+??sSq7r=7;Q)/ݨd2E8RVg-wѣY-3Ã= ЅH37w.mm"{VElsNٰy%fh*0e4:?6ot:Mg;?]/iӉԷ;7lvyw&鸉${|,Isf pտzSG_0O)lOƵR6AI5e%\HʅD?jm{J:NzK^ L 0d8"p)$-$fJC69J*o$Rʢ R =39%4}6ohPzt^6BVy*;]6l뼼f*ŵO+n8k#g 10kCfG9k9ʤ@pek5Ia29,TXj#D (EK&F2r[x#g ^iovR3ދ]_T|s[tB8ž)? Bc4=.]@GBʅ+(LPX"YGϨ})LkG܊ % *5|vJK1h4 dr"c B ,:FvqEbu1Ç:QzMvLP\)Y,pQqI]p!A9% DӀQ)d4!W2)ҔD4P+ 2%͟BzD3.1*'Ӛ_ڻ9jK (l=Șb|m} e_zmzԗ.ȓiRrik+V-%iSn)0K(*[29(#ýW Qe$ti%5*D3[]BH#-n&$N~C\6&T PHY/abgl) 9-, Q4hqKݰfmN[`x;S~Ogaw\T 'AU!gу$N$mM,5&@r\ENgOަNPP:ȚGaTi$Pzgo=7ȬTlx[XU¬SE.m"+T(Z~Te9\*68TlpZCsJ霺Y,{ą5DHBjjyH @MH]<5޵6n$/ ~? 9'l 2Y;觭,9pU,٦$-b~"bK5z{x/h̝ Vj%Cq*9 ol~.ҾG۲f_ӂ|{F};yjuIJS[FLo(Cƃ=k2Y$RL5/▱噿g9@Rq0%sH[]R눕AW>"#C Th(K!zƌƁZ;LLj9+-i}C܇=IKMG0חσl!3]/w%ݢYX;|o꽑Frn-hQ.NeϪA)dˣxyg1yP\}h~zXG;{uG'\.vs%\+N xdXG:EԄ[D}ArX89W皼֫mCǮl|V2󥹙ld?w}۪aB8Iű#XRT15&`@c&,"աP x+-\&K≑ZZe(~kDΓ 1.RVa $Xg՘IDk1D ilf#gCrҁʖ=a`4o >Kp&\;ixTq8 $PNbXXHFzA g)Ju'l C2l56`ЌB%h-JAR>RL@AI5 ^Ȍ fʙrkl׌ll0gl 3B%ijM[r~U8m=nS4p}^YcgRhd 3 j-mj v `e) `_BDNYx\̮hlܱ&=wo>qCF.qVqC3 t*QNBSWNu&p6rʩv 6li{1}; Wbٸd[3E^/ }3Ůlܱ>|CLfށ h&LeG1>AViu$>A9Z=awɿ.3$=VqY2di05e,x\Ih] 6yw }ZI4{r1eLPhͽ́!\F ;)S+I8ľjpUզԥ#=1~[]?.:s}Ŝ)$H'RZl( DT:fRL3q0M M4'F؉FHܰqz%,Q"zI+˹щ Q'OVEgDU+KWbxZwZjzNwEu8>Kõ!%wB\- '++¼X Ʃ4{GÈ G^`\p\GI}Ԏѽɣxs߫VkN ƺiJ ATp)(Փ!anYodk$ 0L|rO1ͳqƚc丹 LrZ  W8eG,N@R J!TU^I}`a]6-ZdvS#ᐌuD=( zƴ 2C'nnz<\('cw_7 "G1M#B(@]LH &Idi4:\ƟƷIM!``kSc4QSi`!zie0Q``.h̸{ZL<5M{l#8`,gQnڨ]\ߺ&W%e&=sg"տ6MHj^ܶar|зHQ:E)6ZL>1*4F~XgaNOaTA->ﰆJk*hրѥON:)~K߄meDEF yre\x0.ހj6#A-O6KǍMw\B4_]USy5r>Pc@E[G=+2$VD5:"Ӆk.idž̴0ާN5sYa>&IbOdAr]=*ʼ#ݛʡLJh@->y(3bڒn~Lr26(v>_Ow0ԥ@X d;6p&N߫Ҵ}y3L'KbL:1 ['Y;:6s=V0O57jVв@]{M|0o&͈INj6Qu_rdnD3ǹټ7V'"r8ɶqo:5G,>.mꝯ% L$yp>ޑ%l=V~ζ *A7x63!=>OUcI]MiB* &,vV* / ^eQz]<+'azsՕlm[ё~y=.P^z_%Q]ԸeOd RLZj&E4!+(}en*W(X rAI^+lWZ¬+Na*6KWyϚ;2,$.C).M/)^+k)L@yM>jb w~Z9;wviM:,2h{\M5V]9&- HGxiQ%^ YRz﵋^Jy;~:xhKvh}΃(W]N'{hz;XV :$"H [Rr"rn)>*_J+",؄UrE0v ǨU1kQRl6GY59Ns>+ W8c+?\І\^P}eliHx^,hn`I2485GuHP95 &]Z$H 1f́I/:|P$A7>P3y3HwD@@it@(fEwZIeH'O AP$ N\y ,EػY[ &o)-5X]1cA0j"fEAAm6?_kkhiE4e8hxBv= D͆V) mջ>AKSpގJfc4DJ5b[#RH*HB<?K!*P)1'QZn8vfbsG1zxߚT@.&OU @4G?*M^g= q`… }^H) Ibd.`2xylVÀPHe\:xyirqs^`ˎNѴEd?0B(E+LD;~fJ N/?* RhaNFwKeUF7q}<]ޛQU'35w)BuW>T7/fT|N̥2^[v!HTƣ_nUo0zjc:Ot6 iiSHgq`Eǣl's.UDum=+hgrtf2Ru|7Pc!>Ÿ>}?*A?rWut~L~??bNӇa 48H$"z/ϘMM`j_?/G2#X|UͷJ(=)~0}7KN~ҁ  \I]|g_A/+O9fGUUT{pR!Bv:`jlˮK?礼tvqQLe?IZkrpWԤ6ߒ%;%=5T읤ц++KYQ GDjxTNX"Ht<(=35BJg;p)ut`$OlU0'g|=x D5CL'>M(NiZ:= YƒoDu7&pWw՟IRf&R _E7!|X/#W\ߏ?2j-䘗QYGOꜻ0(\jZj\"gDwGu"һJҁҖ!T=%ͭy8\{enk5tìy_돪oM0{QE ߙ\&= _*f7tA큟]KR۫\fPn3w go7z\gԸ> 9ȋ &QacD; ۫<$$/e6DGvA@ʠp 0cJ \Vm4^E:j )9!RdYJ_TUƬer)Њʵ!$z$2NtJ+'>E;z=XAd= vρLAzM$ﲍFnzOi'}8v{xet& %D )溟1K7CV7ҭ&J e?5+g)Mc_it+D+λv#`T⎆4xO>j&/1#6~'>ŅSOۜ}eNhM 6^zdygwxD/ْσV4(XlcdAG$Gl?wt7K>Ӂ`K4\ ׼!&BpFr1L;^ql |"USXCF W9mYVi 1HIki/M!M6:,.*жap1d́̐WT#g}|g3|=_/X#UlTQd{Yx&QI))JUҕI\F{*paiAw yB}Xa$S!1G)B ހ.LO04YHev !/Iru d =zE{#2<0c:92Ra:;"IZShyښ< t6@- cn?vI0l8 v=8˴[s!r*i_.yV~V+[Et_oIVԞzM9}~٫w`In)1Da_C\3Ԃq6yQk{Y+Τ:I 6Osmc))Ɍ I]MiMٸ,Ͷ4,b3i#=#޲togΈe3d'[5ڂ66!րF ( w,$M)D*%anSh18-GL^5F39r+y+AU>dFΆi?#ʵ߶`|' @^Y]?i <%M]`y㍾)Į cSrͮz o>z܎ItI6&r%iaڨes!3:'2'b-(]vfri95Y ү^v@;)n>}4l0f8\1ݜ7 ogK|uӌImzEΚnay#dmM"21y 
:A`o!Ucy&(GoPcv`3q}O'hr=qDޜ=02%\$/E٩3i}r&|rK%V)/oΦ"7*)UEb8eb z~w/(}௉ѷ罟&cAga/kDMl5D6@i jD0"VUTLB9zlB5 M5S ta\iOE]jy%wQ@K1 j)ͩ~<% Y0N "aQ{$s!Y2& 5D3 ;/Q8ΐw 0OzXХ7(W>3g3̹Yn9ԟoۊ q@}|6r9,H$g=cnX2J7}7HFRQGV*f06Xߠ@laXPY]o=}%7[ѷגf+Z8Mj-.sHp*;Ͻq Y!Z$X L@j xIC Jt6Q"j+]VC HD#j3Kna nQ͔O¿pfܿ;Za_7's**upX (Zt>'1DBcVvZA41BkssY#YX$ UV3\{.ɰ CBF5Y/skul s2_TnZSZ2d)`7I.'3ےr>K[V9`9 RU±\v!u;z=W^5 Mm4m_R:]dS:@jiM B$("< o#D23YDq%F;1rW<|OzpZϓ,XBfܻ~,(e[D(`<y YL8i_Hd"5U\iRJUKʑE\)m;\ U(:£*YVҚIrLїcT{Y9 JaU\.RdYrkmF2f(FcLསJ3kT#g i枆>&B5%G#?~@Zxzby>6>:@&B쯹Ȭ/ӛqQ.ֱ?LΨ@&#t7yhX^v&.Ålm9`W蓗ڢd?@ʠiȬ2:e%mBЫQǀ]UŮzɬڲ)x3"μvrnŸWf`k< vOM64}.@KVk\; S5"o\X%MSJc>l!tfо"okr SPԷe@: x53Jf0Qe ғ}s*(X+`ZӔn'm?Bظ=M7$^)|r fC,C(Mdc Ž&2Xڮ9H-#ΉdxΒ*\YKIVg%-}6!yYsEŪOγQC Xc%u7BN:DeS!߭vږ=rp9a۳AaKk3a"1U^4eQ&c4ȁ&dB876$%/ ГhR\ȳʂZߴCdSwϥO3+`OL >Jn"1yz_COjLLn# X^n2ddTYZu· FUFT**cbH`:fKR496q%]&%U[3V#gfgTӅ8cK](xe]h:]W]x!c`L.nCye&O/hkRRYI60иLJ2eJȚ!HerƲgcu܌HP=W6ҁl'ۨ3v hɾR9̂6j^c .ǢqǶZjx,ev\S H{4 .kp@0h*da9=>RiC&ː#yA$M&ȶ LPd8,#X$j W#g>lZQǢǶQtӈ/-:ӊ\5@'3e.!ed) tU551|$Qp"j$HZ$yҀZ81`95EWI/pSlee^*W$"" +6E'Oaj@s@̎Yn>jܱ>|#Lfރ 7>=h=rm4$O9 Md=OWe?<<0ȵxϢG%q)3Sգf jܺ^]s7hҢߣ^,C~6&VW)AtKhKF;ke ,(}ӻbкtl )wE(rn7cfѿ8Mof.w~^v{?obp_0SMq֞; DIc :S̠E<% h+q~ F؋F(SrD ECcY]]U]]A*.DXk !plpZjeXtFt[)8E%|Ƈަ 4yy; \M=$;f0U1m^uJW>!̫0Xk"ɰwD1rD%]%x$NA vp4;S+z:Id!Xl"-R)Fr`2( #)8JHrŵw[֩<ջ'XuhD-K( eZ0`)&gqtʇ`Pz(RD׷kuY٦GCR!(mˮg# ?ɖ6xbC"e|bH&PDD096IRJD q7#1$PjzN7xl0Pе1()42^(H0z4fp naK6729#09<8P(WY/GvїUIh_Sɪ6/ʪYC_CxTnL >a t׊ /FaP->ykXU2ZV .] 1=}[*};qa.nF HnyIT5!ˁ$fpqp!О )ܼuԴk]J$Z> q րjT%3 8]]"K;62x\^g;@Mi`fD#(!E Df6RL)o_@cK2l6Ex΂sZ'v7YeŞ;Hux6P|NkJWS*:[=:=b[6[mߟtU-F z3_;p-q1*cFBС`rtJ${FJT0v;m|3 t\*3cԾhid}l7!avwXa}5sS{.RO.7<&ߝLknFbTcl.m7% sнq9옝D[h;<]/"bBz&NQ X^J`&(Y( l;׊KE>cmZ#BIv£U/8ϊm=cx߭& :$"H [Rr"rn)>*7ilBrE;`BPkL/U1kQR) lFjv\—\~*apaÛ 5f$],dBZ& jͲ,A47@6qfchJD(rńI%`V He9ıs@ r׿r+d8TxG؟FXɜRafXTqXtRV< ˏA^A(Yg9‡f&o),[BBJdC#j"fEAAmu\Gϵ55G@-c`qD8MU.'AQ#H;{$2wpVG[cFA3asB"1-)$e/Pjɏpܓg e:Lÿ~#< 駳3=?ŧOOkT/kUQE_>8!γ`q8b87/O(bC2}S v` 6 5:RL݉Õߟ' NB_ջO%P0 *:mq\ ˥ Ÿ .0o=8\Ia0)b8_&H2gצ"Eb^ORtru}('pm8e=qZAKBC#s|%F'>BOqYɏҘY<1ŸQwŋ77ՃE!2@e+7rn!Q6~@~kPMOڞ_U]72rm7!O0etA~:yd할ͽNYkXA:BHpqȆ1KblX>_ 3׻ 7GޞmͧǻOo~t>{;`&pR$OF'!x?}D׼iT]VG]M~9m߇#Ȭ0f{n Dכ/o%';gqSDvM{IJz3? 
l~U}3Wx?Z`EQcT%[ ;E6ic]I~3F6ɠ,`'ITyW9FhTkBpҭe;#ӆ zL'x#EgZ EyPN{"gZk$4b &v`RR]rsT#N#} j׫#(z0 :y2ѫ:/i(ki( tAΕ-ur/&&* /3xn2MI1"2ָlR_r'߫]:j줲a#kJtjO2:%4cDhDk̉2Rd@E*Tr`U=)`k U ÆˬW逰 ߜ <`D%KRUe=5t;\O1dgmRʤ*s,ooMcF*Y^ = Eͱӎa Wj#ڿkA #Sœ~JqYSTyvvtyK*ޡpi_33ZIM4eKT+"(\  1C8"jZ e&HO[zg՘H2b=6њϭ5ix2\]Rkdt_\mI>wy6'Zze"+fRmv jnzx=)`V$N4+HnDp˯MR'T@Xؔ5t2-"mv6$:`7MmB^hFxӷSl 62P| *2uirFwFO6.4kv9/8B$)UpfWRL_*@{pQy̭_˜>UX~d37~6e兽"}J3 OCWg?4l垩a4,|Na|Q`&̊,68f:CWJ bxLw }nVME:g9ɳiafvnTȆiٸβHׅAV?\ 5}]3`\ 3 ;T>nй,ˠ\9ɀhhxK.}_ KϙRI1wh+5[JN#$Ras=#fRǛ:A98A72AkW)ƱM‚mfq͊hVyǹTHUG٠CQ{0reXHMN)(Bɓ$܂'~68c.jbc@7=QjH2*) f 9g)UΩekNb0tyS,UFN @6Rj9LJZ7#3 85jLH lh֥@.9[b)"JAo"Xy  ω|u X D2=ubkgD(쀙q),=Xc/u2"R"/:;"2ذKs^o'FYѶfۋ^Tywoqg.0=7Q W ˹`"wadto#B{ 浵y$"kY"MS/辪^<:Ej 9QۋxJ׃'w}R {xlZ@NLP!j9d%7BېDPQv9 K(ϓ,[4Lr!೫|җkr0Ԅ&%oxR'ojL]ԂrV.Ԗ(mt)PcVکTRZ@פUj:Rjsfȵ1O&M􎰮طG%٧^Z:-ӲjYɮSV.W?zaJfiZҴ@;ZZӵO׿lA3 r;WHCsKFDoAs)[OQs ϝ3D<[P@"0zL*>/fm =OE 7{QG)hE*L|B$ݾ.D$lJY)Ga8ҲGKkl8Օa5Lh|!Scr&N6ܧ[+ +#|ΎY"JK'FB&jNqcV޾fXU|aXg8Cɻ ?l4M=,p>sL`gqyWKy_OЋ"/K4LN!-p#bVy[ufMp+}yHG12+IH\[.#ሃ@e(k*Dkm+7Eȗ@ː iv-(Pl`j{cK^&E-R|ıu<33I"F42$i"ElA1p)D9xP.y#.o@^3i^<*ܠI_|j;be'5ٙ?>Þ ?:EJz(@vŕJ$ *NR4y Zplzyq|2 ھS}@ yRdljb[9*Gpw[N;R6̷JU[lPsT{::>;㟦lT+Qyjh!ݻEo4(,s,S]U\iweU.UJ ЯpUDC&T;c;XU[X+4WZC&mw\UqΠ*-m7WP]Fse;X`ev]Uqsj!\U)%+g !stU;O7WUJI_BC$v lw\UqqgZ!oW)ukdtX_jc§3>j[Ԉwi4^}.T:eQ==PQXl׀R&[gysʅڿenks¾,{XeQ #j|4aE`-RyD)>X=_ňz4D*wg ̈́ ù8OW[;ͨ46>=Ƕw2C/u1:HOP=m@o.v<:ɳP$fb4D0O, S2JQPL(B\>Kb5X;OhMvȳ)sML8^M:xYJ2R iftI=Q7\F51$?͵\)!eu%b͵B4} ˤ0RsIme?Aj\l8HF[ K3KezMeۚ-϶ݾfmVdA)QP u37L`NȉKԐjX[:mTZ5D1& H[XeDXT)gХk:Ymfb:^$;ns~}l+аZ_^=|!i5ZܡCF`wyE*C^!ӤeÝ1Ssv⇹q@YFy[l*CjRh@4o"ZUDI7/kZm_^K kҽ읅\D)1V9ȀCvdbl}6oh*v/+PA'P#(lZbHiPHEVyP٣QEPrg<"y-z|xݍ}^CΏY~0Rׯ՝=s8{>fOڵe(}XFd$d- ˓7kAiȉԄ  A(n΋[ŭ)y"njsHވ'|ke#󘹵d'gi)g)E+T B(H,*`L*;$T6I]\AR"ip`'u&j0HBRFolWg!YϿ[Y /~~?k%g\H/=:͡# p6.ˆNH6 Q7EhE!S {.HgDAq|*儒URB'j M\TuBPu m$Zseix3f<T$ux\yBAw/=y!hu}N} Tj#aP&.mA0rJ6KaE,j;(YLT.G$i3!dFEa KYSɤ>a)YqJub3t,gǟBv՛RF6WL8 bS ǂڈTkȨ cRާJk$xfs[uq ؔHiYL&zVD5V'¦Ls*Qgpg⬞wmfM6,_%>ETW yYG/N(/N8q Wl~B%%-c'֏`'=;گ Kݟ^O' wlZi E)b0v $)̚4 #6e{JGJ[DI[MW"XârN%xChԒINgv",L!5<%rE꒤/jO&X{LkV> Gq4jy+$`!$LlH BRT]T~%=#)A8RUMAYK٫(k)Oш m0)8'0vٕ8'j>Xefi::;hiA* 냑M@p5^jd 9L=Ҥ߭ z\LC~t5OVܱ1>SDu'2)Z km#I Oy#Hp$,ɤv=áHJ&cȲ1SUz" $bКW)y}!M"$nh{ח~}iY7Wu,q_{jʦ/Ŏ.>ShZ˪h,/LH3`AX 0p_ e-7˸ `8`(/4rByxDsPI3{&JYkTkRA"Y DZKĜR(tΗz(HX)] "0^LS^I?\>|خ*6uZvPnoV jx2}5GrC# ̭rZ^B3x g&Kx"DŽĎFƷ=k'A#dޓ,9hWVc$\ׇ"=ӯbeh³.CC&PԵfw7H .xa.F5tŸ+Y*%6-JJ& 2ΐ*PY8.im/Ʒ;0t\֮iO D0qs=:O?7ᚦ(9 _mmK%0ܵK%$Vܸ_֯`{mʕ-mK֩~,qIP; =v^u&xŅg/^pER?'q'Zr}1o3 hʚ]}:!UJi]IUvWt9}5X~baëYi$ESy-c;-攋q);˧1˕i&/.~:x/A9sr V(%c )L<@ҳ9hrY9.-06e&I/_Xt6Jv4XLmh>p]UMK4mfц(Rt#[@E|U/je*(z]. Lp`([A'XTIT=S(E_3|Wk7Ý£jkC[ԛOdD iֹ9pQ8) spU0,u(h&*$*,VV#T0Y{?fϭcP |=]KiL]7wS %'Ny ,e61x>i-MfB6*Z:&|r5@U.Fڧ_oiJ@ip=*?.ppK@?`4Mr-٧|ۧ~_Wð}о*:Zk'">\w8~0b]:X]DJs>M$PC&y`* Wq`4b(x3=.-M>)F&eKhe/'/#bK~qڏ\)8?sXjgP`?*qN%a`⋚2$QVg).yDoOԛ`BjHBD'_@[šMevN0|祍 R?Uf/}=s{7N=`_~LYƮhI ScHZ/$II.^/ooOFC7En^?t%̘૒yUv*WNvg^ Υȥ;ri$zRHX2jcpi d w,{3qK$R0xmJ->b:s#wٻD^i/Tg❑ByFi_\Jp{㖱,onمﺻs"ג6͗Zn\6_["\TfkJjںE%igys6bY]drzFkݜh2iXpnqf'9p Q9|\Ϛn{$"mrr(^UZsM'2%Z`tY.sY#ۆ~\o=֩螎GV39gFi$g~ǘ|khR1Ai>G+O^URJ_.檤aq9}k9^z'ݏ?7T_v~sͶGA3[(V^_] eIr2L U3b>`k!N=H3G c9[$WP -glzwXS?uEÎqGf2EUBx; b^^i색F?z/KnDMevM0uLdqY6e_z[ƤRW\mEnP˺xcŘ-fl+PO[%g"HGS>] ]2rx>zbzY[UmC9u9F;WƜU+V%^'4Bwnk^]W}+ &J* XhqN`Lak4>3gY)Kk<' tQeA{o5K2 tAt!i],#zʐ b,p^_f2w9$% J<3UxM*[m{y6 liY 7<.EScGsSpJ)f׫߻T1k $@edp%2Q%ywESНh: ekY!  iDT3 $'-s$-vZI4-jL/{NQWSt̋S[NsGx)~::KntWmЙ;u"QUER\ VkQ`z׮ȵŃe{o{6߽Uz^|*np@5)H !XTYb"Ι,˸Bi2> W7YWݜ־'V! 
Z,Ũ-v^3=f p{t^x)%To)DΘ aqW?R Ѳ頥%( !re)C@ڰL1YBAkuzbr"^ڦluYJ#6fΎ.i=8>| n=:::]ܖ35KWp=ċ Q XU cd;/*Wy=$cA"}lH<>yvAf['`2rL [fRW&wT $RA!x5yEI5kB}ܯk'BA7 : =D*?]D{ri@3(mcAfT A}AS=BFPpJX#π faV'}Zs\D+w Yq_7D<v=Fm -BŴV.C=O ioRlhIAD&>޳+yF>8n6.7FMbL IYS=$IQMǐmӪ鮪 \$@L9;EmV)drƁEQB(D.D+=(EKj͘2U sp1tvsF=+脑W׹v[6Ŭm]YdY*mcMͫw1A,dR]Iauh#C]+(L22 D Pc~&F愲I,_XsA/!wjgs5FٔNKEQ#Ĩ$j8C-Ed<8gDB:Qv/$9XvLVGAUdRWZݟn䆓6& AZp,MG\X- 6y%'OHS6\JmYƁ%Z2K 9 D3.1*'Ӛ_:\:5^!TKr!>`M_Nɦh'q7uHK~ᖐ&Jh}}8&u[  yRv&snɃ\.OJDn*#óW咦F.L*I NVQhDC8 *S91!qZ~tHb0) h6QŁR) *-[2Ubq,䴰,4,<,\`mz52%; vonM6/l@~M$&ERR jh8Qi%OE$4ƒb`Ce O!{c( Plr=NJlNhc+|LkS:έ:% Êy*Ru*YYvU_ hh[b:'L<$ 0ʄ <811BB" ^lL3Kr>Fs}(R=90Am@4+~3=y}ۓ!he{v`=¸r\{a4FɄ J'u(!rCƝ`+rLqF4S΁4NJWQ"#M<͖gjk@$[XoNl;-1ĹpnV2VYv`>-Ӫܺ.~aD0ZZ3hJL!E!5]D%Bf)DQO3OC'zZEO'y:CS|2Q*.L(OB(-SDkՌ0tk s""hEp\8{g% \h)rVЀļkUV#Dv\_뗎[u]h}CVv!,ewāyd!} z T_-Sƣ֦Y_IH W9p31 &':1?(>wM;V`һaN1z@G5C{9/ZP ev`Цu P ,n7Hd-pi\F^WWZ>n'5#آg,`6o#|蒲}=)oVˆui8?jhhP.CmkCs:l6'tɂKuu-bZ~:dD%wx?auZKZkT HZ̳[ 0ܡ *̓^Vaڃ~e v!>'rm7p}q|N jF|9!5}ACGԣdkf==E|-F gsk&?FA 9JhžtSz]U8=hDʇ O^VqGRԟԕyx225tq T0,SdcS4xAN!#*8tY998Ӝ?ۦv&+0XË0s\X͊Ԁz2^VP LecUvI0S #{/bSu8 ujg6V9b֮Ms:q ZkIMB;)g!$C ~ga K!-TM\&Fؑ2FrKu^80`2s}<Z74KBBvZ\9/1a8 jxw})Mb1> j˶A;<| fG49x)c9hB4B6 vLȁ\q!'C'_uIH1LGEmULȐy  Bzk4:'dS1vrI \~ *i&̺;c.Fqԍ&. m>QADR&s#!wT˄(M$ 9!FKOYńZ,# ; e&u% J)+%csKND ц Yi?gEsFWc8ϭt37 F7F /q/?oM.[]* ~or<͛ޯ?Sp'?^zhDՅDкm=wȐ̾I_..AC ;C]ܒ+8_dob2Exsj7DHU[Gs[7/nFR\XeWuWN0>,3tCj!RO Z ys9\bc8C{)ײV_ )˝9q{@?'1?ީݛ .Fv2{cݪbM#agkrmWDhݴC[, $nI3ݴMڧ%nfX#ӆ8 Wp a4<zӟn{9Zgedl]Q:NBHCh3IR/KzA{o? i_3`qoo??Rf>˻|'Οppцf/F!` x2eq܈=EߺYJKr˼OGYmȭ~; O|~s7?ujD,濂h#Z'=߀x[cō^ 1CYTB%<ٚӻ-bvT#i=14,qDMAs,5xI PHotT0yU>aot#AyiH+g ڐTEMVFm`Z46GCwSRFA #}J&,uΉ`,n[#tdڠWuK$%x|SB}ǘ)&ѪW(vJ5ù**IƠ- Č [{w ʾxvN؜H,k9 D1xႱFB& ڠ < V:Q+gܳC۴ҋ\oR)Uw,[ B-] w)˧=km[m\Z>>۶~7~-K}E^OXΞ](g ZQfբ:m8F"JJ*Lce6]o?1/,E%1UV% *WE -'sNY:R_y=dŎՇ\ u%3h,Ijc$1AM(  <%uDQ" 1Pcp TCOB:Q Kcۭ:{\jphtPk-"J0~awqYɔ|Z `=:ړuĝB5̬J6)?ԟܸ;kS>Br+j "'3@4OY^jt^2f /ݯfj>smWn['KC^ƣx.0wϺM:<呋i_>t;"Qp-o+'OoZnmJd|2j^|^([0CYE򕠚V'SKxE&pl Ye%.[{#zu>dҊ:X4/g4QʶWy8m9QJR)r%Y-F mlϣn?0(}N'cg޵u#ٿu;E 0`csFly$;d1}[R[WkbXv?t%NA%1XWa}"-Q$V%ў[Z)zL_H|$ibjTTjW&Ja8fKQNW& ðu^-C^_lR;t7{`(B^{7ؤ(S MWj[~+dG8I1YJ֫4yTd1RC( )v=cgumb[t5y>t=/ s:u}fQ* ^|b=?E??541^__O8>c?Y}\˷o~xZ,ڋUCt5]?b\AT3(|5V 383o5[h=fb /Vㅞ\7]#t'?^ nfyy]-E^qÆh9z׫娋/?8ߎ_\ࢣ.]|X|t|MGYE~Zoh>{=^=|@4.~C6㾍F9ſAlpO`Տ_Ks<0ɲ Ƙ2Җ8%y84vװ1} o߸9솝Bsl'Q>Mah_Li})|uHs8<o/=Y2E x J5:Z NW;v="Jսei~p+j]SOCWC~߃xC^Y"u0tju(trw(t* W3[ׇ?/> P٣[V.f]w^j^P7YOG Z,^]hud RZYշ|ߚ| > pTekPh4QۜݠBF+:\vBW3NWt,ppWeϧ'+In_%z&'2jecYftfDmۯe|TZwC y8{Y }Λu6FC籛?yCrHE@dUdU5{_t}D,&Kƺ&9.Id%&AIIu*12Xb6 3imImcR֭ENJ3g>b$늳G@% o-C@4Q:ܻmHGDqBjM :HGRj*0 ao)Wt4FXZ\dgqV25zxxO*` b:F$Px~ps;Y e:$DyK8d[ ~:'evZΛTUY9%XQ9H%9ަff-H&G@1E4n ;Qj8$G1Ƒ֑5~D[ko":eShJK.Ȝ`8|j'X'e5/fLE2ڐ55\ }b3#7FL.9("$XtAڲTH!ّ*͕Rj SYi2](!m`0`%#QV4TTPtB[q9mxAYŚ5 <(GNTbUl7%.A2&OF%"1`sAW\hZhFfűnJʞeS}`T 8'#qHJƬE6,D^Pmx_uF+ ٻZ8ZpW&uS`{A96x X&ŐWHU ƊUg c< l3ʄdk}_œ`+{'Xe<7*zSC FM0r3._s* e2a̷A( ՜` eP@dBEKr X2K֔UB>7PϴIBe#vf!*?%9 0+COpPgžuU"M* 9okL&4e_ J-7Q9AQ"Ł.0̑L0zY[Ɂ%d~CBYJA~ ) R+#?I]b =( EXhp-!T} ԯteHH/'ԠDJ(! 
e1 =ҰWmFՊ|:omBi!:;M-apqNm^bB\Rg$'+ܥR( /HLE zq_ܼЌ-#߉iyUw ڦHUU#x:p4c P4d鳒ћAhKрrUe =)(yE n8W:qa-<%EHiD(2 #QIs4XvyyT/> ͗td*nM ě!q2pt/UUBNU~d}My*ƝPeۊj̨jf !v "}Ʒ9r=( /fsP6D"E݅ZB [2MH H~3E `ZY$Ou1edr$ڱ ,< WRgɮ$)E2f ,V5 A\K,yNH,"ĩϱ7 F#tf% YQ>QlCV@;nCYeI51!TU)تȡR+Y<~B `U!eќkHU#@ r[5Ai*R) U^ "zŝ`!-#ZVHB&(` H5oT\!)HOm-Ѧi`%nozL{~|^rIV084qt&Df=588Mӿ)'AE㮥YcYkA#󨐲ZthhQA)ID o5̘c7 (uP"Xdd*f* et 4DWBz `U߼Yf+OgYO7`E^H"Vq*PbUCU;Y1ƷY'yˈn%ažcO(eQFp,z127g10d:9 T,K?mJ6t.~ TI19dD<HA5kփ*UQi J>{kfR!&C@ IBlO֢~]nYksP}}5oRD@\BmEw2hU|Tڠ@4XpaV YQWEzʨs!Z=?MД nFP5>dNP֞f JOQ!,2T*'Yn5 7EAaĔ *5^M!b!}N1I)g|I8M](I WrhY է\F9/Wk?]1w\u`DfW_};ZNO?nm q*,pIeP"(z翡8}Oۛ#xUβ`*K1.=r5 _Gi](rWk߹;=w±k > 'vM:ʱtncv8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8uq8$'!u87wb/Nvo(p=G'PNNNNNNNNNNNNNNNNNNNNNNNo sHN FM\ d@fZx۸_hH|?f6@-.ЍãZz8.93<%[j1mzG9~$dA tJ iUA @A @A @A @A @A @A @A @A @A @A @A @A @A @A @A @A @A @A @A @A @A @A @Rp@ o@V_ <(Q d<(((((((((((((((((((((((((((((((((((((((((((((p@Ke\ZItĥXjyk.-xʝ9uQ6g@,3Id,S^( ¥C.kz\DB}YBNĶ%#4D{DWs@$DBW&ҕ!sA,?X \|+Dc PJB] ]AtkE+/th i=]!J]}5t%lz!ieNW6|S{j F(-Q+]@W6= ]`7tp--gt(/: R6bbә̀KuE3SQ~6ۋL5&u :CG /pR?CmqfȘҘZ|7ȅ宴#f< M#\#|i@ [iQJhi34;+l ./thi;]!J+.%SNʍ$UGueC\R]!2@EK+QD }utb:^_ɨ<*Uvu4)zO>/<C e.{c"rPlPʥ~)f7(]]>)3 xh|޿fF3[A>3lcxwo^76TJXcܹ-θNAR9͹}/\4͵Ys,Rę1,ޛ"ՌN^ÁR\7Uld*D iU݉nn ](b2`(J,lo)*G ڟ3#h%'m]!v9ErJpn+/th}d&wGtp UTecWá+]@h 7:T%X@vt2u~=w8^9XKx,W=Ɵo?ߧ`G>vePcu{ Wͳ|:zlw,0~%ok:y<IMfE\6{Yg )䷼R:H/Hr sxH$y~vTYˎLJoW+~hK'ǰڰ -7OJYf5 yUInDXk*P36r_PNfF?w}fn_vf7]g͝=rW o&RmwaU;@WFZs?+-t(yX"yte]`]Y-#%ׁRk6If7 (̸6+]@W6=xRGtm+DiMC+dcAd 95Dk(U+e뗛IO4aF4( 4}4͹h/$8:,>(n,/źV  }mAbyĪ&y3bmRUvb FUl7j2ZCmGUDIUpTJYBW&m+@)HZ>D`{-p`)$խ+DٶeRZ<+l7tp-&Dܴ !DVRS +]!\ ]i\[P20'BLBWֶ~P ] ]YѕAz3whPm++fLKz7,پj3b{mVIһJE[FWzҁmzj#BKo {JjNW@:LR u<ՑpQHBJf M`'y+fߟ6Bkl;MsB(с9M{sFn5%4U Tc3 @Z9VndЎ"ԛ61S*)DV,wj3=A{wj՝mw[5p_O* \,*%GۄR[CYt<+|| |˛k5Žߕ(6%6RR(eS Ν5)Ig4.03%}$:Rpr?zz,_YM ,aO P+]5RAؒyy=u1^YRВ.gSes2D - $HZ5adKgdᶧMO>fB?/G Q$žr#!(3MvAt dͿ;P'ZȆ&Y\v:ApQX9ǘVfLJáNCY__=}{h+?OkAwTCvN١SN9 -w4|khak ZIܐM>C <-T=)OV?+]LrƉq- d![e`AU EQpK#'s022疌*+um֭RACo%F3=<~L2lNəƛz%v*c7{!cp|j^g b*DjTFF!G/7y)*}.>JdM%{u5_H ڼ ˨>k>hr5#ɸ![[,fAAjPv96]21A :V u鸥[O8 R:(6-8xnsYVDО՞WnFʹC/ԡ bQWO?vRKۦZusl1f:6U=.zQPe(a8!X{m¨^^1ߌfϯ_q ^SF<]?#QeH)=Y4cnJE1T1&_S _!^O _!+ ̟WC1O*exvZt9:͚Z9[Sj I)Ԑg9k I׀!"!XnZ3$EpB$ܴ"H88J^Ơ$FOzъ(ЖrhH:[qtT"Wuؔ0%#DL@W*-1:̑c$:TvT#gO7V>rpx X]fn^)^dvm&!C:CY!̣7>j``tI9p;N=2W1D|\H:5/ń"!MBb@qu2k[PӇiu1.|rpu{*|ǧ^&rzօ)h@кGO[?hk>ewmL̖2%$**>ΐ2b R{M tRf2Y;6F)ўi a 4&1K^rERH1EY%k@62V#g72Uaa58 %0`QltY\lvWܙBy\_8b"* 44i-x)cF -S #Af2Ʋb[h%z{Ah@) ;EF6meW&YP;z9Oy*];EmQP{`6dBct^[-,`B`EeLY؀^ݫab" 4CȊ2YšM!1" }:,֎FnҐ<[4b4E)Mw-5kuEϚND p{axR/Zޓ,9X7BL YIh դޱ>}=2QCq(.նni`YGYI_EAOA$|L/Ʊ<7 L&Qk.IkYeB̴1@pU"@.YeF[Ϸ`|?({Z M{)Fg8_%Is'OmŇ~JuCG%㕠iOv[L>tJ7%WMI>°>/W'ͫW;4nKvU&vmn M xw}~ĵEf7̮mk7ُ_6KόFg4=3:~{|2]TtƸ"ߟOM [m[2~/bz^QʋJ?zxinCM|Vݴ!k垉N,i纉JDCZ;H.mG8aeG\>!TZ쾀NcB{oT̷#ڟ#|<$[Nv,[Q~1 iÛEi*ө_b}ҹU{o9bO8{хȃf.sQW2mNP cdNx'3OteJb&{G+L. |I^9;8?Tڦ&˘0]uf-&|yhTF Dí 7.dl0J 54T8RIG\$q:!|womwΆAP8KNq Ebڂ@5$bXvgYi7a4Hzu^9"g942ZR9{{xWm)&8Y (PrN$KM!˻bNY]T^w7><6d)F!lɲZ+3DZzţR^e\O!m-Gt%ۃ # x)u21"zy rJbrIZ&}2:agsW@˜\vevU&enCf }*|2]r,hfBbH2a6yAr >bdi3`3҆(RĬ#)@Er2F3J)e+1e+ s*HqQjx4LY&V*L c'$cJlio3Ep'pg ܁/S|)?i}xC8Kc FTP|zu+8]d^{dD$Z5@NN(C1y`Gg/x)cۀivN!N(8Fp%ƑgU~ ޑ T`#e$Ѓe_j%i7t9i}(v}[@it1)G7gi<Ӹ?F{VM}w}{wf>Zhߔgg;_yLěwC=P/GڰnJT*R9S$EEF Ez_Ɏ  :i|MqiM4g|i)I!G% ǟJ}izR=FRTF_SҁI.Z DY[Ƨ@Y]sUVxBzMJB=ωvO_*IB XC{wrk;/m-6thw\ߎ*-߷{h1-m /jyuhrFߙ_Odz~K7}E:2ވƮ{&R߽7n HN>.OG^okVe]lZ,"K@H}ŞWyI-MXfX,~yAz1C75vQIbwEN_h76q9t+%~h\HlPH)e#i<F. 
H &i zkԁmDJI C,.xP!"e:QټF:S5uEfҾʦ}o{mYg|잪Uw~Nh|]1nۿL/) -żtًKk>ʛVyռ.\Ge͕a2oyw}2a˂k{螊4¬K+\6Y%[MTS ۳::W\ "'XEIz})!slU Hl n$ j ml۽ Ů[JUIomiH2jF\>y|X#IRpe -/gP0 ClD٪du{bs^%H$FYǃ6'TeQ8@zSpjU2D4iUMۈ8; U2.S+7ߓypׅoC6e7k_O蓠ƪ@4zĉ 3 As%e? ZB{khMޒZ25׸b7̛'ξ9};vLlMo 6`ʘ@`ȥ-ep>A< ,Δyp/hxi\4( 5$=DꨜAyg$ ԉI" $P>\TPj7Қ| TzDP:j=8O22fE68$k7pvԉ*ٌrdO7x.ࡎ;aSkU%WO>Fh6ܴfU+i[6Qe7}UFiL 7Q)Zi'6ZCWk@}th62Jqpi ]eJ72J::GCXD_ToT6\CW+eƼ"0LH>޿?~ߛf;g,(>#{D]Qsͅ! 5t4ѤMJexkX:5-,h Mg錒ӎϐ9kݬ?(iޛapBqY$?<ѳ7EyI+kmiG-/e10(Ŧ\f32o`{ӛxZBY "q!84_~Cu^"T<:;OG(#Kż # p޿Dz%):}B/+9)'EQVJB Zkkq Eiם3᪗eHKHE*'P8I2s:.0vA.iCQyvATo>BUxFE*ҕ46*m]etQV]])#U;뢫 t[JEhΐEt)ovsftQnM80'\2\BWn|0gFΒԢEtlV WfejJNW%|3t%a]/%Ց AǡDDIDOj^gu=^QVUcW^IziNG2GNiM#`&54jF@i:d3iN4eEtB2ZNNWee٦3+.ѦW=9Qc%W WM')!s aA`C*~6ģEG@WͪsY5dF@Fٰݬ6B"BJUD42J;:CRTk[DWiz*=сGJ3M XhE(5-thk[/a:z2Lp&BEthWV|#QVuvtu>t u6XEfJ2ZȌo]O/'B|q<8.?#ѪCiH+s2{s3)oFq_.wFaŦכwҧGŽ9;N.7Ydy>z-" =nxZN%@GH2Ze(mhx7yS̼2i{2 R^n^e:G՛v毗/z?߽{=9~~U^`IVP|!d`HeyV2 _G"J؟?~սx)aV;i!-wSޗ& R"ˑ8(^[I`%hSlPVP(I!Fc?#u &JkO ' E眽"~p7)}B:&Kl,.M紂3ay,E)O*eח /,q5~ÿ n򻫲v/jg׫z3ǖ#U+7>v;y U-*5VhQ8IuJXK,J7pgTRr'hə):Vr8&_ yaAk2f h @ygX٢4i3)œE4E\鈧Mh'Ra5yTpJ)l+%nP4Rb$GbqBsd94g^".bbIw5 U)EKԠ1C|15N8:`PɣLj:pIEZPՍI8ic~cmnA]_~].2ٙ}AתTguQZ./. ޲o2B|yj ;f~(܁y7p[ !W6b3;'~YbG}{eʵ,]fiVٙ ^4~d+ pӧj晫GIz9*ah?Y²YP>؄g[ߩbJ?_ L3Q;G)x۝2{'Oz vG:tiޱ~|±&UCvݻBzR_Ip:>ŖN,h]w ᅈvZn u`@$]Ԃ߼T6 <}f8zVe_rnJci_GLUm֘Ʋ 㽝\z9~g$oer7lQ_`)b`}8xMt=?~O[R[k+(gڢL=1ASiRHBRlKZj<|<Ԡ,ŜxWDMH!l6qԸ@:euֈl1[# Z#5lA68ԕgh`E@a:I$b8YoM:E&H)bcpo*D,@'S69_NiT͋L9Q792z#n8|*5-apR.+nמuoi:E/X֙U4,|xգOw0ȝЦ#Ke1Y]uL/)xOV1Xesb^zw9=l*5 d%^u/ARry|X$)x0DBD Ǚ-%L"y,^|9/z5o+A"5:EAZs $N VZ%O$IJV~R-W>#HSa@!h͙Brr4w"jGsֳ7ˑ'9P曰﷨"qT{&9*ኜR7GoZ_ϵ髇ʐ㜎> j $:KIH0Q&yRb@rp ޅڤ+ضhMi@^>-"{L̎ j2^nhO距]wTR}SR&t6A&{j0eL J0E2Q8Ejxgʼ+4|4n X|՞S"uTN۠ Xą N$hD}X!5IZyAx#q4R੉u Z .|{pe*,"d$S͊bmp+ ^n̽R'gsUz{"Dkƶŗ?X~06ZUb:}D*WRBFeIpmIh5'(EFhNҨ\nQS(!gװ)"XZ&aʉS$uÓE٢4}H,?A V*tJ] "#s t pKb.CTrrP# s6rZV|pKgښȍ_aKNNpoUx٭I˺TJ)Rd+>"% IQDm3@w:jg9m<8qQ xq 7|,}KDU# ? %*Eу$ 8nqDPQ+<&FmsW˩Up1>Q@;ZxYAs a9]|llWX8 `LBR"_ 94N/gs)TJX*cCrpM3!:0do^zDNٮx{._f[Z/dii ٛ[OFqne?fŕN4iN X]2EU"US9Ld,QVWeugWopklD/I=>X JiЮ1ȝ "D锨- Jrc4z[u3ktQW+?1eQ 3?VyN۲iߛz/d)+nq= wD\w q*@o+1 yIY肱*GzeL̜1̠9/J.=,38V LjW?(q}B+BڸJp!*ͽ+H 7@cr8l\N݉%b2#U8uZY(ᣲNp\z 2f^B&R&pT(fVHu 6S-FΖrϖOw2?/> q>=r1_Ճ&饸x>:k845l:eD$A#KS#<()IK42Ly{غ"u -+{hweYz[$灲;Pux>,x+]_wzl9Lv٘ݐթJDDAA%%TZa,AnLT>B; VG~QmШZ/m#ZtX` ˹נ=lR lt_[eٻ#@yԝϯǺ|JC'Dܵ (c :DS.Ȩ HM(:JLH95fLRtI;\b|S.(Z%P2J/+="긮V|\ɍBcD19cť2h$4AjF;-4s$'(C┡>BHƠSm[J TiX͚RLu!uu^uJEU;Nfg,Y~:'BeGYc "))-4j9Qi%O@UM%Ŭ ʆiNCvl`;P ,w?\%b:xO_5!0&r >t8147X`ݸ\jIhe`y0W[q)zBxQM//FiPɸty4*ievk~` LiRtlGQjEx| Ԗ1Jk?LIsUbUPA"*DÄOZd _,6er#RБWsmFCF~[8f{,l=ch@^V9 QEd7!k㎆ Icچr6] iA%Rl*t`C<}۬B|q:`"+উZL: f}WnwdM%mJw$gjҀz{L7iOty?+F gsgӥ~L.JtP@]n [CP]pLLFٞIh5rkzCos^a}q$\ j;a;m[̶J=c=CPɎެhط6MVyiD0F)-PpP#VS-"<̱z}XCrPK߀9"M(BE1t-ڤb}IaL:)FTSF](́>Vu^] T1ص:Z"eƒUiM0@J9 I95$$~۵. iN׫L~[A:u[.[|odX͙\r_eB"~ϣ2%Ct&g~No.RŪ+ZQ!5ަA9Xn:&ħGkI-J%@+tT(\q!' 2[1vAGWu.gƗ.tr(P ş'T!ŠhbqAA,9aqdžqO8>3Зmq\tA0BY] q1'; "M.9{=?ִ>f0*JID/y7Z&FC5wEńV TC=\C"Zad,qnH.SID\0\RE& !\͏9e~3k.P,|&F qΣyo2@q|hR|p%?A{s/#!=.w5'`V$CSg>uGT#dnC$; G8ٺntsTOQőg" 䍣3QNŋK)a r П4&)MBUs漫Y ??śmHFljkЛ7*ƛQ܆3-r%hyhI Rޮe \d5ً 8XS`z>Z#fO~j[6傘se8NlnۥÑx2jڜ:C[L(t$)t$=wuú%o>2|@ q 'Uk5cU66*U&PIGDj a?} vI5 $qe#=C'jq*(YTÃ(}$} (4LFU;}+zAڰ͹"*w1ܵ"1)2hO)*8״=lIca_!ڄ$ vcͺb ]eQbۨN2RVYk(m6g*O?yY{z74W]>[8uj(t銬WDWXpU0jh=:]5 0({\z}O l_ @W0ծM+A 8jp ]5NW ^]1;{b92qՕwř-bns.. 
:j)3%&_OfY4wt'Dy(V+OCiL\lh=h?JR#M@67fe?n&e&71 l˲1E>/tW&|WV.^8̐ЏM@ iXe9US-rNZfjr+j=W:l^sU1&vur@1 xD\^UĨzʨɏi7/pTE?q0tg%wt%(AtʒRvHsX ]5EW -#]Lr$~J \;g|tP2tʓ7~HsW"8 -өӕz\tEԀJfWW^"]RnzHt%QЕ Exw6yQ+OG_|\k;D~]= 7' 'Еj禗^S⶝wFC'Afc?7(o 0 ~\RCiAk:M7G~4m+ **9V4#]H2yɇ {~W9}L_= BieQxQllD -ws̊>~tT:k:DL]Ϯ.;A?k< zxϑygLgC淏nAiR7 \s1F u6V'\\\>p^]ncޱmB_MQıf]:\ 6Zxu-KQ *km Ρ$RmR%T=W5P)+(g q!f1j6 ۜofW }nZϿ=􆾜J 7gsVcޡzZ0a@q%:%OV'b !iXk 6hVbٵmC>H1P֎9P *lS":VJA{`3A&zlu)ٱ͎gu0}wl=c˯w[zV˳G"]-N/Nz}}ForOp*+F8%*!mO֦k}<ry׵ACF,]t!+GUbطԩ\٠75i B*b+S.n\b$5D]9z(.ZC"Kˬa&.;}@6.Orp=~yZ<'tb|~3m,fNJFt4+gTC U pξ87254g/8A@h8D;9') Ȣ\M8lg8\SazǮ z#k vCY\XlUɳPU >r^ZCd5,Px/\J Pr"sYQ5lÅS?Xf\1 #?veD32ȈdDֱjމfD3 BMUhJB A8C^I+u4EQ aqio $47kL-@όHSGy2S쭗ʋg^đG^|;d:%ʏ6)R 5XwdkPȋSzǮ|ڏ1yvk?ϯˇu6pC\Q,яG~r[ `f0b8d'm(ݘ,%&E}kO?^vGCog> I"YߡO ڄ8,9v޺1Fn=ېWqorD+kx7gӋ2fd}5?mz >>f!cMl^q*D 6'IVJ1WCQ9y/u&}lr^ JCwXy0:af}۝~?_l^'ۜگS2.?xzҗ][>`iz ' N2R, O,Fh}yՍGbĐ Πʐu5{p1KZkr+eڊrhP˞)`MJ c5.匾TuŌ${VӀ S@ WH",Bld@!nMq.J>m7Ջv94kߌHv5G0Sfm' ^̮RC#C|cߪਪipZ>erdЦU$CP$fQ^N7%Qty]qTF0* Ux8pwkY44I\aS?z~~5t}=W޶1j-[/ƥbUƋ'we=Y*γ7ޯOIMëf.Ւ<)[ foLUr*4_D:4Mw\ms7֊Y05aV<ל oE1*r4 QmW$vu,6P?N5rMYVi5G,X+KJh88>mqǴVv+~4]j@t;xJ„L0k$u)5B cRQ0aR)P+L7V6]O+)$Z[Iޮ^{X3Rއ>y`禆LTe.&7䴌 RLk&E4!+(=r8-գQ_vy+fQ _N0u[r˅q4oA r3Zv 7}v_F3Үu쵲2aDD&p̣jnDg o3ʷ7$|9w|S?$ۦ&0B"Z7au_7/=ր- >gԫ\!KrvKi#o6`N'F:jwxA4ۚm=lM54ADZ)x1RJUZ-%G"2Mh7A3QhLJI PJ;f2Rʃ!E͂Vȷ5vig:w8A40ąs~;/qsjxaZV(k8t$ǂchK ܑ+&L**H@*(c!:_ZATv؆F3O9#)`F;̰4N+ !ʓ$N sbAFLMPO-,Kab`zFR8X+D6d묦.bX䜎Fk]fOF_hiE4ksdV!FkfCFԮ T*$̎Jfc4DJ5bp[#RH*_$hJ||ې\3 +Q)Y0Pj?p0cο5=mt%\L7RY/ E*?= '#UV™ }g`A<|Qh|>Ӕ`@r 8$9B'Swbpiw'l'aY(*:mqU ˥ 9m>F|20Bw"eWDŴW+vG9E() ۫t:Z~5BN8/fb>_9)>|X{&ϳwZgeZ.'og.+fKe0OR%H6~^緰; 8=1˦nHs7%xsc7ˋ=q?Gǣ]=El1+A{_צwݙ,tꡖ;%ӐcS|qusDBTWIl0*gnp _`;~ÿ݇7?o?N̂CᤉIb@G0p]󶺆ꮩb{t[}jwˑl/>ri͏B(=~fL] n;?5"$+m#+6eY c0\E(*X["bsmJ/跆eE˺̪6 FZ@IdPq0K+ ~EMʾ-YPXYbSSC¬ԺBzvPl:EgZ EyPN{"gZk$4b &v`QRr)Dkk D6qszzqB1pgSy1mb[=)~`>`nC(Ja`ԋfdlr5mx y$"kY%+h .E:xp tHHC)^9DlJa8J#68R$#Z{Lb Z>8¸IkY8H&b^>| lM YOa͠]: oa6Nl0>]?՛.FQxC4WM)ÖAslTrT J֣WF3-權_]|޶,RTLyB&R Z0^Y:R[񲷬seoyw2ނVRMYR0ՊH8!#W@{̐5ΨH9ڰVn2`KD!e!x X3RZFL&Z ih;{+Rc4bÅH@Lr7۝2g!=HOX\,*h&U:fwNIf*!ݙqɭwIWjzl=f1-/dhKsln{H][S^j}=L-ꧻ|.rZQu.C+"+s.Ҭ_nssn}o~qA?KKe˭͟6Tm^eVRO$ ߃Oe XLs«ׯJrjUL_G|+i1ⷔzǮp|qqR/% 33(pVT_Yav3wb}uVůVYAq]$S]}S}lVMW涥w9a[]g 8wPv:˪J'ŷ_$~V@^ꉪfWcz2tY|\p,¶Absjד9W WçH]TaWO>o)#:NHp9E{"1;SYXu7#tG4 aZ:uqlS6^ Ff\W88i ndaRiㅑJ:8Q!(R)&8IOp˾_\]qQHV0cp9#NYDpN-^sJ_m/%}K"^X` x18+-r8H,DZHN& 4u H h֥|B.!,Ȕ^E"JtCo"Xy (or$ /@DYf[lM= kNcPuH&({@^ĞЌ ^<㒪'kJfI@ ֕R3EwS!8a}ňq!Ϳ|W;ۗ]X tq3s,D HiZ+rfm9o `hn)1H)h{e1RnQ~a E+a,TUT` ڨ94zNņ(r^j'N7Oahr])s9)o,_h|{kkq\VVF~9,JEO"L:Z@KV޺$pr6roWMEo;-vż٠7R FZo-3/tNIsLN!-2#bVy[ufVM:LuG;&crJi/(QT˞{ʧ,+$/u -8R==P|7QO3NJ)bVcBg;orɤ9zT!$UL F%Q=5v* }n :,  z[@Z:qҬua 6cK)IcT8Q$7XXp8CD̋SqvHe<׎Gs0"x2J}H|'Ф͐Z vakV`Wv$0[RO[_/۸ޡv²^*D*&޵Ƒ_iemxJ̈ gaw] #c(&9c n-śJ{Ⱦʌ 4yǭǨz/Mřu)q9Vqi.e&dKwgm&SCmv`#iR#q]LbRCU(*.,@s9'$?q(y+ӃrvimYr-Y/?cyS>zɯJ 19HG.԰{E61!I#*emڥڞԎ|bum bnw]Ⱦ.g]^LuqͲ)reԮߨv5Z}~(o{oFۗ~3ֻo(R37E_#\a8}i3JSqu?/f{`>1TmS7dٷ0~SM~X{2GLZ![jAJ9E(ʵaMS vb zGL.s8]r~;m>'}4G)`~OTr ?*oS`O)X?v_*SAKOd;*gXzhBu`~ @{w|$A`t@lϏO_uunj]@^8ׯ~o?#^6ݻ;oH)69ġٻOUv~΋um__ OmW۾Of"v22:`{/竟6[}^rB^zq;^lѐa?㒂Wu)*@]ӚgqvuMeP ]}޺b+P^|,᳤9*9QU\S [W^RGJs-ạ+O?;ՏYh/9K;vm-Զ+օv_3>_0O#چjh5A]4zExZ-4'#aJK/}GJPRt+KEV՝at Fѷ5ma(=-zk ʚ=+ ] \Oc]쎰޹9\Y?7jh'vcwlc/ո=*K nu>u|1ۧvQ Ϝ+y~ ʎ}j"d{G9_.pVÊuzpu$DI'vsod"6{gϝђ?wyykFzvIܫլүŋɠ%#( '4Fu"By|1,3iqQzȍ淙)8zXˑrbߏ?$ǖ J"26]&K?vrә.vvE'b*/߼=Oη ~DŽ軆VWm6ਸ਼p99ӓ.JE]B٫%S} eFXWufgfMNuhj^cu*D*UENkO9WS*9Uݫv#>_ctlm "V7; FV&[o-5Atlb(ׂ@ M\FkUX"mv>7C(FX9ޭ-5r[\M %ehԔ1bM"jG,:Ōi8h춸\L5CѱE $ 伓};"Z#9yokuuh(s@{Bٜ"6h2ѡL2:玩(\ !ƪjawfgZACg@chF.m'/YRѦX•Ynj(`ɘu!g5Ÿ>}j,-hZLM%9ՐJJ: 
@r=꜌%xu}kt KavnE;IFBƑ֑ ~XO ҋJsh#JKk|K]M`Ӵh^,T Vէ7Օkʧh(8S'_c +7%cfT.Y jA 1Ȏnj%x!Ԭ;v4F%|[(_ "́ ]d0`gdjb |SZX\Ǐ:685E @rܪCZ]ė 0t XBJy9Xkɺ(2VCL:vM\s \/S QxB@pCg*둔YlXkYɉXC*D@ %X_;wMBAq(L)s` `; []!\ "U3+5fC2!Y6X%. d> A\R}S:NJo%Ce:*UHPcYLB2?+dW$+M֡\֕!ՠPwVrE 2nPFء[@( `Ix#X/v r.0 ֎ AC\oS/%JC%::P%@ y `bB8)VZ h2ÙRPHq ,s$eYjȲ (]79 eBrFA uu2R*R%9_׽() E̾n:hpv=#V^n!G%McHH/gNΓFjeV"ZJ(2KMȲd\D dR{4l {xWP>V|p.#hҼA0(FE{9qc9"94T U86H;,'Q"_ {0kb~.?l4sDeE-x :Z^umz`-$L>%@uPyPmJ_6` J@+J2Ba1:8$;Z:#.c) >@E&rZed^SP>x L-P&>y _6 5Dn#H܎m }Ïn/d!T?Q y E*֝U lZ .k$ S`M0Kv߽ܶ-j<}di*Yrѡ" ,'[е 2"1wPڧ~5h9(P"Q)_C݅Zc azn;H H jcYf}JhB;SIJv 䁀:赈 )-f9,-Df broQGj3[f6f6wU }6q\8-=y㢠g,@lh8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8D`8M 3A/J9:ȕNNNNNNNNNNNNNNNNNNNNNNNzN ˠ}rYq@ OO ;'styh8h8h8h8h8h8h8h8h8h8kmVea\ NW! qi$5X^d7n~+imK,c+f:vE73|db%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&f)&A1Se`@ xw%@O  Sb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&Pb%&a]Opkj$ﳣwj[]ׇfr}PۅgU}g$ 3q fH q)KС ?vRD\z ĥ+C `*+šH c(e\+X`z0p%P*Jc(%W [= )Ww&.{`4 &Ə 'u1(W .EP*JKc(" "\ )ʞNAGDxs JP"Я>w6O;Fø?0,)ףɹH3osA GB^ ayΠFz}n,UaTQ&Oeqs:'f~L+{M*Qʚ L,-yF:-:a-EeoןeG lv8VkcZ)fvwC3"ȥfEҹV;XT,xѵ ռqŇFoS[G=`]Uw0V+9 3q(BrtJ FGU K뱣:˷>w?;ٿ^?2h?zs,{E,5 N {{ CB1 }kg?{̞uWX^[/; ah}K=o򳬗]|[a0ώ߀DiGLlnkti (G35? ٠?g"RC쎅WA-#F8,8[B-5=-xT+biRJB[y<>WR??IԽll nmnQwsBaM=6Յ;&nL_@9h<c c2PipsF.5q %ݦ͇@QͺkK1ucA=Ӵ|%КZi67an lS@}0GaWRw0Mܙo@[~sw%]PJni#ahM%p&7kBxL"7sS嫋ٸvTDiA{(ߴ>˘m/Uv5K?~OPѪ5֌Zg@kM6 8^ڔg] Ԗk7{9mtuuTd ?E[x@au:Od⸵iƴÏ>Ph*AI8q>;y%/טPJ}niF)%3N떒8U[wzRNlSd]Y"3i0#O9+{XuVL%C/;n*KBe5ZINX^CtxdzQss3LzLw:x=i&ir@_ 0'`2TJWǞ2e|=/JUXNjWC+J4WO(lA RlfFގ?-cUPe,A0 R̀dQx ;.NVjWY|>'>ca#|ZްŒ f{ݺ+d#z@rx,t8>Nj:Z [s"^<'Z%+P2(Ʃ(*ӎh_# ռ)7<.k,:e}&$HɵfHB4iBH3hl0der?lq6sHF=U"iF`q[V<;/#+}@,<+KšD\QZ{S+ŰW fWQ\q0QZ;\{1m \ibWQ`vՏ޻)%JO'KC%kègg3EGY: Yy=%nTYՆl?o["DDew=YVu/REJn:Ɓ:ԃV]F h:q۵!`$VЈ#{SPy +Ҋx r)d,~rEPv Oc*TʃxϷ^j\x<,L,NT8.UUW<r)e4m\]+^o Q4 ZOߝ,z6wҋ :~, %`剘SZsˋ]&zJ-[y'sߟ6ܝt Gp6 ˥eBosX 8\X 0+&L*V 3 ʘscs˜S 7t>,rNXPۿHLZ8HdޙV(Vc 3"APg@̡u> :KF5? znh 2cW uVSW`X!39]`[X?Ǥ4meG-c`q .^/cQU'TFD*}x,1_ %hk(h16LsNHAFPe.BRQF` 0|% nFz}or\ xnqY/WgV7} =BwqwٕrQz¢^\jzdfa$?H<5j.`a1\ٳׇb >ka'>Ft<Vc,!(t&A J1u3\tfsa' t^?[ VN[lB,bR\w')'Q¨:_˾al|5&Gc&r؞/~˫UuIQN(A䇺tnT6)gTUǻrwgcv]W ͧ?V}Ko#h<,1߲sa!&ƚ__pS5ds5Bj&qOȩhǣŢg:Ukk%ZT릶:U΂;sPoӒu6 ]YleY* ЉwUaQ8s)WG'_~*|?߿ū7'XyaDdo~KZ:U󶪆USŶZo#f7{}Ȇl-Mn9(}xȿ>8'(,3AkI&g~py(TTɕau"l#յ)ee6-}\o57qkQLe0IZ\xIXy##ӯ(DK`ZXPXYbSSC))F܋ cZp d\bXxDcx)wVs'7|.{:^xPNXR:]hӞH!I%-X;(I{:l99 ĥu*C>l2 ooٰS A TOvH(_+"ՆS8׍)kݪ^}ndUFW6vG:cGǬlRgq:D҇oޟ_tdeCb)"zkl d4wY,h>5KE(^#k~T:1%hY}C ˖R./~?aH儽ŲGp_-vDByi;AYrli׵90/ZXrmP&q@ )&x a~0Ms/7leLmZe]*+gr_T{(SB3F&.0KƜ(SP BB8"jO C`pJ#66HF_jRz] %KRUe?5qn#ke3|^]ĸ K|?UK fi??7/3c@lo]Z_/20w+"ƹňm'Oâm3tM;tqLMޑ?jSk\F Fw2B\ovz]]^w>7g}Ȕ`mEg)fNB4ؑ&*`1k30IKRP/F`,6ʌ ;P10gj{Hr_v:bsOHɎs~VŎZRv:pC]bIY,[P +kp5vkN?kHzO7Y^~8XhlR'ۨYItTQȨsf\ ' (O~@)P`Qƒ}JhEhK'QEڔ0%e;b+NqR/%ŽW7 SΑ^= T`ܾ)Y0^dVm&!@:"2#GGo| "sVx$R1DҀB$y-&))RK Wx Bmtg-v+j0o-M_Lq^6.SMNNʌQ̘a)wmkO%o?$d=voT6fh&KzTʇw2 3^ #eșdei9n+&Xn451K^@)@mcv"RčDεLFeaXWf IƉPʶF[xV[J&ne(J\C?9lo?&z՞kH{դT+E5. ֐Q $G x+)6W0tV\hAo@gd;gGx8{XM:N~`3unZu]#3c laKa܈~|&ŵЏt|*n#͢q;aC3^\L[ӆ/+*o"<篛>SfY a1d!Ec&'ZSRǬsd ҐRiJ?./|Ox XY۟?xG\/WVvy9#[1S,ɲ)'zR%! 
iAaXz i1ZX"˝a4 `bㅑJz&A0ޣ9dӉ{s(P(mMU҄MEsN$BrJa4 ;_hJHTlsDarF wz@ypyɇ5&rF'S#t7P=3ܬWnxH3ȝ%5 @q&X/)mI.)21iIfhxcMO-ϫ<  >(II IgPpTɘՠ] \{5rr~mQnW|vyV-hh^˾,>.89:RQms옗fYgvv4L#{y-nڒ n%yR/nseGwZ쾑fv= ?_ϮZb Y.z#kYi:BЛ7b0Y6ܝc,<ϛ !_n}?c Ŋm;E<"h=MV{:=q#8,ױD;67ӋW#- o .kwOMhs'] 兡nYV=ԍ:,hLv a)B)a#%e8 EqS[ҹk/_x_qMi [8;} Sni:Ij~fnWӸH0IFV;m7:Ǖ>h]V}Md=$ҵ] ].XtVn<][{ZZYst߷+dI ֯[`VW'vt4dyz3DLfoBc-;p턤qr7Y%3[_YSw`p3pkB (:H;0ܟ`bUGjNؓx,{wֿ7ws62ܗsWڕ ]"h 1hmA1M ,›0NZ!rcJL@(E-E+Vco7󪅔1~prI#-!l..*<ݑ>=Sij3@qo"VV"g҃GNX3VCZt!XiB*JG@Rdb*(:cvjA 2$M1߸x Ff-O$>xNlgsRW9c.0*O&|KBfeBlX̄Đ48d$lƒ>b>ElFZEuhX\W`4iTv9c߸ ,S^yeО{̥g@ȏ Rsǣa2؄2RhFS(PȻaVr`UO  Z}V9س4^<`ܨv{$?v o䇭muI|/GV=?13?~n2]/F2\Sp07a='#eL3|ʀ x$ZaB StJkhUP2W ujoHQY P''c]Ay.٣aGKYJ[/:zʴ<ǃT"w%|8,}Jo;rD Ltۘzsv~9mcl2 OY1qr1M'4ԂN&M9C/?iʿ9{F~XB9k7u-P[NE8;{KJ>+)Q\Xէ?.o+8Kh|S7 3oJRY:]Y:SG1=bjpgz@g'_넓*k .).A,c\ΑBu@dPz?m,d ݭǓx^֖5X wGp݃OxKlߐҪnO,GWt\!^&oz\rي0[]=\i*zHg|˹S̥۶zϙZ4WzJjeGIyU-m͟)6ץ JRZs_L +x)I*mWuI*ĥbLRTU'Iev]b,h^l$؍yV>0^=Gr ؁' vl7)Eɥc\vBGeuрXƃOѺ$Xh%Y1#~mMnja9( b1DXFf,g"s:86I =Ee@x9Zp$0fL F>CMjx)$$Mfdޓ<n-8A"I.7,)eTȈ8qn]uҡ}YT1CLޙzk܅dfl+kgbެѷ}s" e-~ߍȰ}=QLJ{^ KAL7e6-w c+ƭmTkM>DYt(01h[@cҤmyi#(p.:]-pMAS,SP\٢Ք$ kc/_~A'4.sShux8'tO;B z]1^=^UB=Y㛫<߉ޛĦ[P"hdҥЀ%Q`Y@a92Zd ͮW D3Mφ8]dMR *`iN"4mBtV>4xĽf+/cɐuY^g+n1{m{[;ԓh\xYc6%wi@a,gX2]B/1o5Y>eb h) hF W)hSZU ckZi-RG>'^X(@Gɣ ιXd~Kk,W<+I VdM'ҼYM[_\[Ml)>[EaqU|DSq͛}s?\ʮJᇙYd fdOMSTĬ@0g\f,$ Pe-@U9Ht/, Po KYT}Ugy/U:z u!Aנ aS$CE% )BdNe7±a-s]vt8K~^^}Nk"ƺ~.Vk9Ncn_ ڣZo>)Wr %+UI]YlͅhAmT݈Buʶyxr|_8Ftit16X64VvKov]\MjINSH 5_a:g0 Aȍc-C(~Ϙ% jՏU3ձDn0$ȳnl~yB~$, S]c:%љL{B \=}E)t CGF"UE+[OW-ҕ6Qtjp ]U KCWQij:CWŮtUQ +k]RW9":c Z+o}Ɗri'CW˹&4 xW誢5tUQ|t+lDw*\h1x8]UCWvˮg)aj ԩJԧЂ㧡PT/h]to g(]`-\g]T';zztU_}0\q6 V fCM ~XޞxnO[x1ygK&ak)\ ӯ?`ts7jyE3G3!PN7 ΤJcۍMW;4] MZ{p8MW4iZ)hNt2+D`w۫v^]{$=a |EsvG`i}ߡ?>j :nhЦDY %C,NB3JNF}'7䰞6eͥ{^ކki *4ˊ!XâF=amAōBҝ]*\ݙ-ֹ.Rr.ov4ht rѝ l;]UFtJ#:%]wv . WJq8]UFt~mL[StUUE[OW "]9n_\P"VU{ mtWWoP/!_ 1MStEpUEt骢\a Un>nZ*[?aI'Ղ'fk ſΓOq2r,8mhʳl 'ѐU3zqt6z0>dqR*}/Ǐ.tNoӂwL߿7| {xN9FoH(NqDpߋC^{p+K2Ղz۽4%E]л~|Ƒ^Czu}sނIS 9FY]H\*,\㼣n0Ÿ"[T*DoH sx^퇬zZhMQɥIF~·JyOߦwqnJH>.ԇd!M/̗>rgՔݖCϝq.W)5_ɶH4ޗ'[S"o/&x:?'*w6w@Z˳s}}R ڵ}GWo*BYК"r!̅ElAҿb@LRXI)K\XEhS=Ibw&޹Ysh|&ZȤkX|;^uB&R.`nz{-}rxeKueyzحi4105R=|F_JUMm.̿s=OWWf=YR=~PZgB;-qk:-k>3>dF2C5YC$^{S,h!5/HiWQ-IڢZRY9?}\/]u,//W9(D% M9x#jZrIU x}A,-JmdΌ6]ؿ50/j0i2ٽF1'ڎQ.%oʾP&⇋݈ 73ylqKѱg% Q+ʤzSf)M f=m-&=Suک4oYۨ6`}21xEQ`PcжH5\ǤI9ҬGP$&\tڻ IQ Iz )DN)xB(yNFbcp3^nPt Ż۹&ZjJcW.?IrxYZgiHܗ8lЭ6cvV(τqgB3!~k{z{C܇'ts'lUb]0S,bzz"J6jQk23%5:tMϖ҂%-"E~c[p~_' Mo&\ /~Sykє,ɍM@NDG0cΊsiO2W}଎`'Fu /2@V'؉sQ$@bC]&AMR *`iN"4 F+DgUI} e31G+jrAǵOoCzgqz\oK-; nRDRDx(Xw.%wi@a,gX2]B/1u.[.Z.[.[.8x1&q.QgU ogUF(X隍+QZ Qω-0Q(s.Ui:18C~%&u;rojK}ry5# UKIZ4.17䛛ᇙYd fdOMe3XY`NEϸ<YxIʶ[ֹ4Ht/, Po KYT}Ugy/U:z JPHЅ)BʾTCKSD#=%n"'c18[z|vt8K~^^}>/4y_{ؼĽ0ۗNQI)u+Rˆ@*Ӥ.Lz(ZiP c*g'h$;Bu]}rd3U &rRi.<$ x-o:] 2nmخB(<ȵiaq #w3:͐hkb挗 |ARIL fbr=I-ٖkNZdvX:9jI0Dm@c.J%d k>i-WnJZ fOkE {C+?:&?}~g+ܼpmV^1ZE1J_3pt5G-Bj/.涄Fg0=BG! >+.y$XRhTT" __̧W6s6c1ii \VȜNktq/{n숨EɨkL&¡^QUH!dm2Ioٻ6$ eq&U8M6 SbL\iGK҈5 &g5UUUVJmSꙎT̓vmd&!8G*00㌵30{-#c!Fs+%mcuΖ_C P^//'~ F6ʃ@ja# :bo<(-BXG'xabx!8gjxT>s>`.0 J" T)JkYQ[cgFu&EI&#dTGkrT:str=LI27y/;4v^&J)snȁ}䭵j7oDr-9J̽"3* 83ȵ{J+S*HnP2zE C(#`zKG hW12#!l‚93Tۖ5v6[|lak-e[ppQz,y )ͼA.]{wWt`02?=p[AB$E Ek$8Q"ƌSe"cQ֫*tC 'ZK٤B: X{!CJGtH .Dj^-v6[l;Pvkұ&Zj=ئ Nn 0ͥL%/̙(MD sRKCS"Xj#88f!gDю @8c}:APi;5v6Ka<ͶcWHh"jlAε6qDd`2X$b`ip Y Lm4Al8XG!q:zX;cu4(b:qQR#1s bg DCm.Zi5).";ţ]OVqb#PVR;ђ"1T_( Xkyb[]a{?tڋ=EVX4\9rjY9JޜwL~dm@Oa׫CX;VqaɎt`'-=2c߀R~Ul$OE)<9Ø2&(i D.xn}'䝔 Kj-X伏"<)R۞Ǡo)As;? 
+O/K\'̙J,"ņ@h~A KcF 4# Sޤ3hb,b|4 G %,Q"zI+˹щ ';S$ҡZGV@-~ӐwSs ܬE"p PdjPnnA>!̫0XkJ3wD1rD%]%x$N}4<dz鏦-A"u)&B)@VIQ'C+ܲ1 ō1A+-Njzl2Ykp(Yā`!!TSַ˅7(kѠm8ԃ# <L/aē&7:} p3v≷٨-'͸&  a(XI8m8hnw*aLs)xג$QD ~RZ$$4j^)D^?;|4wQqi~PΊh/3ˈPCՏjR ]^&c 'I ]I&mGTw.nKУ&^0Ce5Jhw7h44I\AS;zv~yu uGMvrG4PHQ4Px){Ѹhr{RqV~MO—N[M"TZ'z" irjK q]/!8ÇhupK21_΍~}Q*]l7%ܼK I)fhխo:u9YjߊG[rv=h=G s>Vefet[WXg|f60y9a@[uJlW]ֽ'[jo $ED_/7'Ե3RI y uV? efJaBXJ5C7eCvt6IRgɛn\. ET |'`9kS_U^1)I#ff/S8һ2o}wfRe;\|}C7F_$N>IעuG<+촿VyѸdYP&,yBwہEBzqiGs004E?D*x2hZn K\b<à|}ŨK2zgi$J5 `v5 |7_fQ7CS6@{|6͛ww͏ZwߞԸwn>̞>UU*|gEpEtL qYWk.^[\=o#uxlXnt.3Ku]䬡61xo5<= f3N++6K.}рyMḤcK2\tbK K$y8@s\Roeg<]1{9<rL 3p4+ >H\DDUQ0aR)0xQ(&^+F/I2LgChM6vTޮ_xc`m(#3A_HI_W'S~|v[1pBG);LLL/~qωZ @~[U/n(]{*1 K+MCy'wr,o.Ru]6vڐc e & ¼&BP5; aUTsc%bDw8NxQ 9ےpq>p BL=lwC{ a)ђWل'_;ۏ `4E:*;aԫBtE/9ٟ{4ݎջp݉X1TChѥ&0X+#%4%"Dc-sKQaVDX V#ap)vt%I5ViǬQFYJy>$h,x8z|۠խḛ4!(5GuH2j$B+&L**H@*(c!_uqYNŽ1mC3M8  RafXTqXIRF <Y@p/0q0~neF-тqAHY@3VHlH%YM] (39ֺ?)q(O35m5[ƴ"Cqv592FEA#5YGOU!WԮ"t,&1`t5Vj49'$RrBRQF`"A+/η` ͻ iie+ ~#giM'''WIREyz²L$5+>C.킑zV]y_~P ܧN|2S|J  `;F )(uMY+;S0W0Н_%P0 *:mqۭz"Wa6X->b4/yuS.&D7o5lzdRМ: pq& GW_.HL¿f0O\ 'ɳb꧋_Տ՝pfF3W0URەőb8V5AËSE C%ecKUS3fVCHEQOI]nh8wtl՟rL4J_զwL]ZE0ב\ hT9E2'QDgE?n?q w_o??9&o޿/`R6 ɣYQ (n5wh47o*EӺշ;|s6_#W,}rQ{;_Kiy#2,fMJz ?I]=#jp2WTɥ+M"gib U䍗N[+D&3ăb$iRO#i "a8zIXy##ӯI(HVPcbue6^/%Q9aJ#"Ht<(=35BJX;(9鴲sPuNu| u z;캳s ĝ/xpi;ϒCU.vV^ݡ ll=iD8]y Gz(KxBG, zLrIDwop4n=]i.sW63u84<0;vcwlPczARLwdhrXNV4GYGNJ'XFfM\`h9Q&R >U@BJN lJap4J#68R$#Y@S![TknXe[[cgA:$uƲ=2 |zmxRr[K$otojwx +$QU`}ռ\$IH?7d ζR~*Uy*R7Yåhr(9B{gl.Oؿ?|ed%K>LA/2gQ,c'\UХgy[cg'=;Czou\D>f1ܻKCGgE U'/zϫ?vz5`?ٮ#Y"8O.`8I<lɾ6cԒg}WIđ(qV,NUU]!i%Gcuy .etkE%= U*VϠ# K,8Vyk μx 9(3> 5yHD)gB:2  N\A-8!q>'V+Hst亖RE`H F#7" }Uz :#+m:h۾ڑ,v^'v&Y-@$CōW.־J%x"1qW ] N D s&dER@5F5xk'TN61R%MJ e>lj,'='QoL0e :0+–tJtgbWWS*S[[ p{^u5 ]AEdA%UI@EJh^vr70"V{R攛ࡳ_wuo18|F"B~d_v&gRj]) q)3]$35uAM}CsnqCuM7~i4b|ـC:c'/Z?/{ư;2oOUN2n$!ڴz8頤8Uld1q*@1&J(yzqBHtus=o}0& y~6Ԝ'Zm?b2v#g5ޱXcO=FS |;}^]oe;/VŨJ 7"hi\*!Dwr>0|"HiP,3Us͋YU RVU^utU% J0eAtE]9F5J+B:Y 4ptQqv Bʈb j!K+B;]Jz:A2B [i+l)tEh-:]MNdL s+ˋѮ]+B)MOW'HWN XAtU" Iҕc ֫hd&el7 K|X.N$^Nnsᯍ{[OAdū`\*k>_~o~gYYwƤB>F];>mֹps+wԵfssKǙ.VBePn^06hOC(\^Tw&lQoho):uFOPrxX.뻮{I63nOgQ^֩9ʪ erՇQٵA6]5zWqyC[^4zWޏxaJ?L_2LՀ45݄cVkc*i4x/YaHl3VU/:K<2u%#sUP! )QB0پve ]ۅZ]heetTƹLbbv kG=] ])͜,igrJiΘ+k_mr"cӡ+%`c+W/+Dkwp=] ]0vuoNB+;OWR鞮NvɂwEpU1[tP$ k('Vi,ҋ]Jo%te.ۧ3J#kWmKqT5\yL'h1N[4]ڻ"TղlOWznb%V"ڕBWڮt á}| f@q25]-\DXbh*Q MZy&4}4-w7Aedl4톩\@A*VUcZ˻JzJ~qCWI($m^IҕVi]]!`ˡ+ YZ ]+BDOW'HWq "o#{("BW֪t+XW]!`a9'BwDP~:UIt,$HDk:]J=]}5tzqt/`~' ~\jViJ1c+s30DWbKb"NWk`OWoBWP]`m+kY)tE> NWNuI :n/`LwFA'[0v F yEWRhZ)Ҵھzz^?S}IqiZ^!ڍK- i.;Ů8U=O+) ].WEy緘%).K!XAtU5 J+Dk:]J'z:A܀+ Q{= Zu"tute1k +E9 Vw=PY'IW˒r ӥu"׮NH "CW|h5vE(y]}=t{v=a[ܢ>V{;Ǧ6h cG8ʮE"C  Bb+]+Bz:E7Iu.V&V\{&N cvʴ%$&bJBXiQZz>ABF.&h== _ޣ Z S$YЪJ~Q@;"ڣECٵ{MUMVU)a "ZCWc Z{{Mڡ䢧+%9%x1tEp}'o;u%Z=] ]i)) vp5BWBtute~/+ + ]Z+BlЕ9U]YdA~FWR]!Jӡ+@"nCWbϮgLi߻j:uG>lV#mC):-ZЕ]ϕcDW+=օ)P 0c6;t.9Li+wсKY14p4Ѕ4(7_ithZ,+%AKz)T *qCg=C듒;]'>ȌzY~*w;PEYm865kv[YGnYe5A 'Xu{[EcOaP^x{ko擛aj~A*X c<ZflJTϹ䵤BQ:ߦ[+}v^צ)4I͈e3.l_qhgN4w=\bg\hMV럦In>ߌF.ռM}W`>߲M^=k~ķH-iUxE{fPb6\ ~]*8Fy~%l}[̒w5Dcmo`[Mays~ݺja58?5据fKixTZFŜTHuHg1XF!\p?m?oE[a?UʷxU3Vጣgv "iFHU\ nCʾ8!A/!{+2R]0=2/ל~cX6h|pK(my__\#O,> U [$Z|f?˕;Um{Y`zYn1÷$z4;_(S|yZoу76}1n8at slY*71=c,a*}9cPkp,"*w{!3 \3:׌HO*@cy1{bĐՁND`*]':lPJ:lSPs K(rY/AY S&,DH1NNZyy!L9Eqib"3voO+j:| qۥ5܁%C}7"|Z-^HƷ;3ot+'? 
Ӟ;S0Vr,Ɖ282.91 )+]O{'r C˸fB8N7GJTx)54ޜX/~tk4 ېW3Z< "D Kck7pv8K .8n''i߷GO OEсoZg*ӄ!im4;DJ+v?-r.T:ϖv[[;~s7܅#d4_ɖДPѮ$ݓ.+=CGs=Fd˖v!Qoq|DCWx=~{ ~הُFd)VW⡑N}~2%jo83 !y^*{McZ/٦ZU1i>#YlHTHJ(GO2LӛGrbbqJ냄)ھ1xFfWOl_VS/ۗ4\s,ۧ@t.UgZD]g"FZ=ǩ7R +ֳ ѣ լ'E>>V礫k^qqUYĥUG"u#?qvW NX%=n(s\(Ku6&aJ sLzʈNmr1H x $8:8o?-2Y)g3y\@R‹0 o/Z#c ÉIeH * U!S%ـ9zKqQ{IHAl5K=3~Nm0m: t6)ф9g<¬ZRHs',` CmErK`Rk;RHBx,Xp˨B̕P9i Q(8{ Knz2?>[eHhM|[A9~ȽS/6<{3:)9GkIuŭ&$ZРPF =9Ÿ Ԩj| 'ʛULȐy^' TX hJuH17A I;)${,=6&c>2>71JH# SZs:p ")-$sσ۟J4%GP-REi"I%ρH9h4PCPX$Hj.&jLHe p+`SqiCR+% ɝ4b}r+!f=xQ௃7ah(J~+X~J@.2ޫl2QIΏ DPh GI^)t2D}>u>} hd A1Sh9o:^m#ǑkH*yf('LJ׃K)aKp-_v:8n#)U{S slo,󏋷7 iF4Żn GM;\vKs_5Rj), ʉ+qvyu=u{Z÷ʷͅnW/V1ھh[nv9Om DKAȆ$WrJכai˅]fY^Ɯ#=4i2l0n]߭9ηݜPWedܴ{ENZOfM;b>I6.h&O'Urˤ q_dq{+Dǿ ;߿:~O߽_^R/_ *|X57NuLp*9KHgo)ȻO7ՏBCgu*PQzvY;zHA_;N}ʎp_P#*-FU WW9XEv'Y%H?JZ%O9\t-,& Ʋ@H 1fdI΄S (q))\" @.bUI$%xr30T2P1pvDrSRtl7J;ʰGЂo=^UgR}#@˗ @W*N.mjFIs2T\Hm *xVTtہS ^2 hh9&$A%äCm19*o$Rʢ=<38%L|j b^nNmwsPٛU;fU=)x PʃIϵ g 0kCFG h. eRCSDfem<\B&Κ\f*c, Ho"%ȱ:6&⹙bbz)Sg,pxgYR>!'6!rJ8ͣDS.FK $dSfw?2>ImX*d| 84 2Os,f\rAiM*nBMKm,{Yv\LoO%ԼADH|vdmej[>gzp;635MjoUrxZ+%DrPQx^% YPˠ2JXCL+(0l@R:# - =x:$hch6Q-R؄L4g,nXNW) aƁ¼μ]i7Y3Ђظv'ַNMo~;hX[`II'Qˉ9O Q6 )~h̚ʖD}) ܆lgCnl+QwR`tBێILԯT1t[)pvsl7Tv18k\vgh6i"0iqR;' ka&ma kP$"ĴEaJ[@R2jTykB$^+Tc.D ITτ6Fnfǡ9#>%G4<c-c)XT(ttb!6 5#^!Ϡl@8JIJS@Аϸ<ܢWX&4 Ϝ/]D89⻫!_NY K勲0_gx[ 5D!yLyNJ h+5ZZ)9DLj>cSŰP~X|l~FF?0Ȼi{)QJqյ /6|^s`&אgWjZlBe>GeO.댝P;wN`}*B4@ZNB>$U6$ҍAYm6S@rEGTY-FKєgCt2S'oWCtHꔰ&R0p -ɝH@$'cDIsEr] D)3A$ytE%#M#tS`Dʿ&Fi*rF]%hIĢ@s>c6( G@D &˟ʃT}=' KGM[#1yrEZ2uYl{;r>}{V[_m-ˮw݁D 2˼6vb홡Dض[\lQ~Zz$I]_v;'9}S#zHJ"9_chkNӯw%3mE8}7_k|'쿿T?O,t-˴< /ؿ?>/ڟNgcOCiz ˼!ELbIBy6Skp+lJo kA)XŒm3tqb{trN^v7WHҞqÂq4'\\yr:o3_gs(|ӏyފ/ߓ*7{o.݇PO'rۻ{;ez |:pΖ#e1lJKQ)Ʈ;B;C]^SXS0c*H޼l>ߋY8x9]&MϦB>NFɼ_7_j>C~e\%P<.^xh1! mGC{J7G󟿕ɵ&I'_L2iC\k0x.S#'g<'tK6 |l~9><>

yÇ}x9 %-ߘ++,ޮG XUWPgJFNץe*6Pb0W~0V_ZLu@sP .Za^) 9 GpCOPA -~ƑSr ͇Lq|dy- -m>drEUw6@EtY v< pLJF1D!, W]2P- C-m!tQuxX˫hM9gѸ0-.VIަ̺FGo MvQmʕcGj}UvMI/ߝ)!-.r9qк$lqu1EɸU-{%0J&LkX; 0 VfѭvE(u LiXh9OJ;ߩ%J9e͉ Sf5iqƤ@Ku{Wn& ICRb ޲u3 KZO.:$d-`.˱`Kdم5,$I%n:F?/QYR!Ǻ-Ҩf}9 T1Br0$YѺ/XGC5 WL r<SQYvWkLIʁekKg31/4e]HhVCK^ ~:o-۲Xr hH;#gM)zkN(ce8B]l`t<`JRb'AI󆳋0n~"ӏ.nō+W`(F_nHlZ10$"7[oe7qS:-N'᪟nuZ ٻk0/.mY#*;0'P&j˛ǓOfT%o7;Qfb.z<vUJQqԠc椂~aB.)V+U7 FAFpt\.(R^ n0ﮀH#pv~;_G`pDn67Ǭx1)O¿J6gmob5a@ڂ/cRtppclAq1މ4vt2V ep3;G꾩wOG~Sꆂ \j~cK>[z!EԯA>;,>@QP2 nTWJ Nr<Mg=(-.Z t#95v~g2>MfdIf 0fao9f5k t{{.]@9Gb:6g> WYDQ|$1 =gՉ!A1~; `DVh6/YO~5ȦϽsnݶ A@ݺ~4VR?)SP^>JGVjKZ[V7F5g p{M^"Gzh3 K4(sɇ$Vpx}W1M^]a7zI{ < sgzBս]4{z Jc|Fuf@|Cz.1y ~"EB>Pp1$pR9ސ2%) G~ %,:0Cxv5˽(_k|r3z\Aiy%GIQ(IJT\_8Y8NnuxzP({j-޾9~]DYz:fi?2Ii (">2!{oG&:&{Qz{a6?-?sW^_\JC"Z/___.?^}s^wTVևeoA~wo1ɟɨuL+8璏8L̓zŒ8 c^&JOJ(- zWaܻ+s(%3K)x27TNlQOW93?>̭V,Ip+Կ)_xwP9^jWwK}7EN"K5WIp F9!s:|tKJC${\WLfCoh<4:[z6u!^g[gL ~ ‡plpfjԤ%U_BuO.L3U9m֞JR_i 2|Cm6߿l΍XjwVڝvg;]j7T3 ~7`Nf!Y]ڞ{9~g;ߙ fSd\kf*s«^|7էK87p62kgc}6Ow+z>'R9 =+5cX鵩?gF>uҐkׇY%0򗎏ᐔZ8UaЊH F 8J@JQlP*;)஖}qD_0!lB>R-ȯD/$j'v+H=K p%1Ϙ%ZQ(s]+S5djPWG;3]KV#_RPKS9RxYİX̏J$Ip1XP*2OsDM$x<Fӄ2un@b=T!R[a K8 X=KeƝdDr*ѹu]:*F $ (j'5XB}j}3<N؎w * UnynZ,Uk7@1 = NF!?FZ{740D"w롦ʭk)9X}ݐq hN_q"f*oLǦ=5"ҭFScG'Or\ɸD{ԑtmu*۵YZ^崹>Cf!Xf~$_p&m⭪uQFi Yk=΢k ?_* Y@x2B6`k:s̾:?!C 4`2aפ3i O׏ >(5<A H CGA.c<%aPO`=Ǐ>fzQ58dT+Ԙ❵+ ujFuf3~g;ߙS+ ՌpΏU3PU5ZC]ͨfTW3UfT !8 C٤]MU NeW~rB9hc!!&V̐4O('9Fݝ]>uJtq@9akd{Pj/#qxTbȶG/&ܜЛA/͗7;/Ԍ>w}.;RbgGԆv I3,x beRbtmzW5Jr5l)9` #Z0 D)}༺Mxf9ƚF09TQ@l~5Zs-B^k_TLb~5gUeg=Z7Nߍ?-vNl}xu`v7_]b^h팮<A7`0O32sj`ٳ^mbN603&!e1=I\oi]]‚yJ3`)绶xwDds G̦hD:R (W4L f&rqke؁ Ƿ-էaY÷M(s tKzӭ˽>EתkR)C)h2LNdei݊qէ\S.A\%tYIrmy U_Jץ$V1t^2PG!:kk97L'XN%7ri7Qb-uj1M5s5n6j۹N_I+ fP'de2iUڌj(aoyiYgQl;9_ lh摄 ,Ӌ:t2)7i/3x&2gghpmf^I͞{6;fMɽ6oֈ{Kwf֛Hitʣ- |{xE,?_~'JTm`fH.P+6Y֓X!PFi%(D0'^$FXoz-mJ_֝OFpfE$vX\ MsdL#x{ĜH0Ji?dyP1 u}HFc?Ԍ-2.Ā8v#I]OıˀcpQqŒ)0|)6aL9Ɣ'Z*Md&k7YMnvқ6iIMo~ߤJ޴7}8ћT6Л zӆ޴7mMmMWWR,aoD s,9s?:!Ğ[z,mtUYvU9H:6lY{`)B,sK!1D}h0HMհή/\%X, P+~y+0PuQny==l; Lp߸90hd .ǒ BQ\&C2׋iR0"0g %N sH H1F,1)dJ(!(")rki@:mNjXe^U_Ӷ`2ס&Dj}  )'%Df<׏w8`8Əڏx~XU 2 ؄}8@& dBL#ţH1ͅ,B)3?)>4JT 2, kC)HY cĴR+žYp,'~}0Λ𸰪>\#UMWQi~R|cc,bۚWYrWVWY;Ǻ|R X #z r#B(+cXpw_ľ+Y㑙@Daft;Xpdԑɰ++#u9 nD{0ZރWʊBT<1kR5hCJRaH7viPpl]¸&~2X}UTs񶗮݊ۻ9Z)ۢ2"-zΎEҵݨߢT(Pa>Zv9(S')Q+ޏT|j?A UX+ՃAUA*ZV+Uǥ{H^*U,>|p@2(A(/jEc%a>ҡ=!cݥ\ꐅ7#? 8_ fFg 坧eD!MUcw!&" Q*|j9-I4Zy}0#1#f ̿L1Θ z}\6tί?D[Ieb$YʺTLʭOPIKzh܏/{v^z'! 
sXxeQ(RnP%6w Ac"MEםrvKGbdF!I}f{ʆߓ$ {gH /˸p@۾@,RB;XfjVVӒXJMZYo - ޣGU|l)~:Qn;5 Bm `qz*m@j+Ej_A5hL;l t,nN4OlC րGWv\c5 RYk^Pj{L QzB1ي}'ME~ E0Ș}M?=l-jWE1S%g;jՈfJ;5I쵢㍜fht<C5kD9-8;43"ecfN$6`]XVl$JV-53gz)#zA<^E`:앧i5Z-mj%^&]6a7fhpmt~nozw|ƝA̛{VykQv-O oS.ብa\c1mB+{ Fsr671h57Ki8vb?0oݎt46#424Ta4 cZ䀅 aq_ዚ#ko@zy c~yB!0)h(B:aJZkC6hw048F6YjvjV`wZ]!!}wE }&:[~0+8q~ M#X]B38 nUT-Hᤛ}|HviH a x҅1/ce۫8AhzZgp4x_w/7o/jGgV[ݏ߿¢Zm@mXl3/D?-\j:$- o i'tj$EM&ox2L պLN ~K$i2fbrSa eQdwIcMbs ВvZ> J8:5EiuFi2^ F͌ǙJ_46mЋG27u8,z2Ql wI׮&A&_MS6l|;Yh!K < ^q3Yo&40 Lh`Bp0|C#C,4I.W̗ z4У hG=a =dkг@ e: yyLS 7Zsܶ㌇.xe3^CIesn.8G7mYq H-erf蛏):]ATYTIRf<%5@Y_ pA3ZI+ r4$,ppSlD 4:N[ϋV.1۲- lUP'Xnݲ[60Gw ID寶GeuԷ^I,߷*-YMGcAnlȣⱔaa8 "8F  V2b=fK,{6LEkIX.'w&#ƾ/bG:5Q4FF4!L-ұ;бQ Dc1Wq0jT,LD u@) Cp\LZ:挬œn9ZfL&UC#D=,=fza1>4b܆`'>GʃcB022 B/TEI1 \H,gzCJ%IFVp!Vx[ &;Gn3ӣa7{9F |'.w^ g)azUWmA{E;g](sחbuk$ Io>#Y "\ݥBrΧ-@z6>kMq&d3&7>?̆C(o:oX BTvA)ksE5tB޺΅T)%Sޥʬ|U]VyKNZx9>%\H V}juwmmjU!%)$kWd'/Ip鐢vR߷H&(,zz{zh]P&İ'p=f8blostEl͢TF*u*(a`o#'t؁JQ}~4E}cSs2TJeT14I5h"ᩔ-$Hޓf!:2IiEI u*~>`֌qUY8#GP*;7ePՃlZyk) &p؅ F|Ӣu'"lo"E,3Eq8Ip}3$@($V3U uyWA2'6G)Q ,[mwmF :#x#\~Z~7p[^%D 묛JrOp d|MeJz1,QE"e[0!z]Z:(6-Qf0UBCAloy5Ab&qv$?f40%9D pVFNȅE _'V~,zB #ۦUn=S8 9 Aםdv1N i1+?%)ˋHosگ(: d1zU29;$r/I kXVײNi)X[{ vTިvQcP=͍\P7#uHy0')NcWfPVS .ź*ɕیV^t9H2;ˁX1Z2w=*v=(-Fv 0pe#dzIp!Mh Oh4~ȣA\^귟ɬFe} >/E]zUX kkm:ocU?yXTmͣMal"q26VeIgZ%_0zRsj#go1iJmSָuO%PucZx6fxFXַ N'};6޷ղUܵQ;BuX&yO!pW{Se4MVNGrR<1 `vQvaG:o*);(pWVA!&چV~nǩxQ@b);L)3T6tJ Z6=m\4q $mI+67ՎMC-8\8Z+O nπ\ԺoՍx W<#J8MMAWm 6q T78[Oa4?#cI:33̏:]wGS^"K4̒CuOn|`w5,]+s̜ȵAʣPo|WF?dE(LpԨ DxWT %rw77~PHεPYUgڮ@1z&uǧr;"M_Sz7 41ו^9G8hN$ձ;oߥN8F0O&H|TJxx0𴀊r/x l 6Khr[FP)I:AmQ S7tDi,&oMU5 /׋}7+lNqTF9 D*Q@(fLg,QN+e,<"XhV{}+XBmw. xs_nݧD9p۞u 1h(S=R+w<䑔O^/Y ˂A DRyX'`4BHc6FHtϊ*јu4{D꿟]Η`UjuJqKyӀqJ$HPHi:%lcjA@PV[Vn_[^ZYӮk׃ò&\5쭡r0<ТV{O^Va ]:{=Tk~9/;A29/7`1=60CHRR?`Ʌ sQS/?#5:CKUtte:")։j>R i\|W.|%@"\މ9t1+yז'm (\LD^.#\'JJtØ5s޿Fݖ=WYD?w^$Wfy3<^HU6l1B?|6")A4 >k8BBЄ}1TMТm&^"54a%J1!xD2}W}W5A<PY$YREa$!2f"iX!.s;njWʪ; 1pfON? 
f?f3;z+ARkҁ-]iuafm7._xDMHFo"f2QQ"pNP d$QfFҌJ.Ǝ -.3bQMf:#WռaːqHqI0Q"Xd:S ڄp *œC ݍ~'s@kzO?7a~a'8_| gp]}aVa/ϮBJٞ6rW4:FRǫBFK>[X'ͅGgAiV7qBh6\" 7R9 kpm\8p#8K$ C:[NaptLÐ۳aQ> _aH{D1H#ѵE%%XcĭZqKL>(XR+l4Qs<| y95'~(^Xy[f%fl|怼Ai8s*@q~:WuXP>kepӔ}g+*aUԫc^@r6*'rzUUwe*uf߃L-:?Ł4_nG]Jp|kNs\;OtG;Y^ƨk \u'qQyhY0:Ͼ7?x4LΞWY+Rb/T}F1rCХy2NKçXcWK/FO+p1/s*M?+(\'U{ Y ʢց' y"ZE(9BEQS$8Wm>@ &DcXIPSʚh|]%Z3\#hTƬCB^.Sk_bs]5B\!ۅ^"Չ׈]m͍t"tYʷgXq/f-fXY Z¼R- 0[F=Q:**uɅNO:o߁* (}M`G>[M T<pXmAd0y&4o+TX!~fH GiQxPMY5!Z5J'%L){ӂ2H`SlRXfcvڻ KbDDoƷ5 /׋}7+r=i \t8R*#_Kǁm4Us~ ʟ.~{_yo">JcV x+@RҊ n"*Lٜ$  "Z4ʈ*I Lw~̢ *mFc-Ԕ2038KbR(Q*,S\¨3,KCx }K98f^D.Z`ʏ\׽s #c5a,0낂͗-osn\Qt08$i47lhʍJKɷ`p2Ft8 p1I,(6sƓii8ԟ*h+,_3k+ H*LaD"KfY8 4Oq$6Qz?&_-J(q#%E I-3D?qZo B~;iLg&/scF[3^*R*HQh2ͪsSNBs)DTqɭW36:oTW#:qd"01@:!3BN'0Jd#A*m4F?{V4LXEGʫˮi ,?ncz] M fX&ԀgÏ~~:STiIhxx5p~x;z(?<@_72 $drUv@MFWT%u5j:UDk9JJ{kJ9;Gq4sŕFDR[3aswv̎f1YORU쥸S%ZcQ 5w^7~Y/YQ>0FYN4<՚H7zj 0]z"s`7sKөA#i0, F롟3V7k Q?μ=}s6o˧;Gq47oETkN֔gq ݩ%spXgz>|o\|{BtT{͑D{虦k1k97We_}nl|U¬FjF?Il9WV7L5!Q;&ɬL|@9 h5ΑR,!rGs*2ɭC4J o5KI?M&ɳ&PT+016Yw?Kω!~uIkp@x B(\9m,@` ѹ$fynK'mz(:@Ԝ;Y!8.WͭV@9a^N@#Y`:]7V|=k+Na CN55TN(.cƥOq*)MMYR!؏ zoa˹?޲5^mO1ʙ^ۉytN1Eldfvc//o my-[16iϓiUD{-RzU[@Y5$!pM)=zWʉ-Q8F-v6c**kTB[$xjqh7a FtRǨywضvK&4V5!!p=Y$B}McI/ s1nR)e7jo[8/6ᓍl^%KY) ܻ _p;x~a|tp_DH* @llh%4lE.H%Z7YITB;\7 zхITΠ̚!U`ӟ̾VAf LAv #9j!D@敄,~ ބ@1o+C ?iycEPuu]1&?4 $]2q,)Pa0덬ŔW+w1`2; M~ݡP ]!NkIZ?)i`Elv"dg<McGGI F(g 9\ K%x[ALs"3lph\{l1 3ZCGl@vx~Zuh8e\J՜͈B8d)õYKŒD:^4f0b@{pԟv֗q-u/1Z|||!t/(R# [n{ۂqT F\ZanM)iEBDۛS!q:̦w僃P%suw?xC_"˱S' >G|҉XʽзD-ˊM'EwMPt7AT]Y/4*`')ePb+G)) )`Ar zSJj֛:tHIGCUxE:Pyc[7cŘN`KYAB?\)+riDT@H9 Mb,gq '", R0(cꁸ#M[ c[ BZv?L @@Ĥ7j@\J@=TJhxdHHQ@r: )_,`ɍn-~yw'U~qcm{Soͧ@ƓO^ wup<0 m^ $` msh1d[ ]6BH;V~n>:Jv\'B%(i:b)4Cip.wIt(8 m)ri' H劻.Vt*(brzzNu^<13B5q0dIzڛ 7AjoTejKCherrCBĆB0F%dsmNJ()+u-`b͆ xsG5z3(F$K].k(R~ iq~rP*{Rln/: 0lQA_TAZu9/- L2?-@|Z"=N3v 71B^*lNɡdo-N, !R٧iݓ}'2hf;im;[֝9Fۛ~s"QT!f 晦g!4{jova*:6TJ1Riꄂ@y# 3Rm4~~+`LVy,#C$A>I#ϻY {,??ܷP,F0q4L."5-yW}d1u0bۍ(,s^Wx>3FRq:caY;] néס 1sDžc~:SSioRr6*🆃Ǯ=?ގP|AF5~OEX*ku8 $3AvB*9a-A؄ytOwi<+M#J!oSkQxs6FF8 Kκ2>{#WT0P̾Vf MAwøOQmgcg8fz~n#~YK)n~P~p7=Kʡu9~(nSkMv9aʂAєuHk~С$\V MR ׿^m) !]>eʱ`x8mQ`f`@ 谦hj)<#Ls3un!/)qQ6/DwXM?. ՗G;N!G?\&aeOwjFaw8z A.c1^8aJ[+eyDa{# V BUbP4Vk/C{Qx,4]b-$"qكIJb5 X%.VASp!&|5Uk # CLD;~n0/[X|3v*D >N>|p`GuBWןz<ˡO2%c_L<3uz1 :qN"s: CV)W~*3dE.vX#W +˙XBS %3QD d=S;=uI'Ht+OMMpvยM[(wRW_{ U/ȇE 1,xxB %VA%%kBBE" Cv+m ߽ӱ 5pf<ͳOf"pzvѰ2H,VC2\d*ʥ9 T],P=H̅TAj)h !aHDCM V8^S$lIL& ?/IbB1fC19HJL429x .*ACMGo]j+!E.J Аۣ XuJ)&ZSW\&{l.%ûce.JHd#bTS8+D#Fq b1JflD!jDjި) [RLn7d(qGR*+Sdgz8+^؀r߇=,g_vNv_V+c߿!%D՜ #aOuuОW:!)8pk.xFY{Rb8: ?N_) wF zUE;( uOv/Ef{.D?Jaf9H<$<~|ٚ{\K u[s֨Kz|k.~(OWOߠ WpzekNiG.?W제;FF34q-6Nb3n,/h7pӸWmOۇZTR)^igO[?=8woPi!jx#HFǬ@Zk0"v3nPqe!u0qۮW$0Z>nh+?CƘC{@$ǵqÍ1hD$$#ҭ b՟#T\y#gz>:Nm6{sϞ*$0\xGF1a,NF;+o}|9CL AJ>\dd ON>\@fp]WGLn}iTyhSf7#7_ۓO[Ke\˙ߺIxJYNjfF%fV9NNPsV'54=^f٣94hv yYOE#49ʾ_sft;J)YhqNk[>7gwLYܜeo)ZZ]ss1asXAr,FY^خ3fGǹY^= 3{fz,REf<~c&qC(M3 eADG2F n=K<` O5ޢ_./a}38⫛eh9sUKxܤ5cf%Bs p&!pmH #<Bj{E杇ͪǸW$tD5S+r8C)= 9!%4ZȾЎ.4ɑ1DG ME#v`0aJX4!UI 攡ix&BsF`LYDZdkXPRp~[um v-aϠlPXDb֊13Kލ͑qN?3ņ6GRET  f3qE2uf% Q4:" \SԐ"I!pCo%Nac;V2CgA!!pFr0C SCsh$P\3ٓ`Qp%DF93X b%sQ p;%"#j2Xpny{,hҢg|˒H]SEŧ7W3 ji/yxVs}뛧gf$lOp v \{=ycGωQD4})d+s~urDs 3wGx^mo'G)WgֿsfWm~rƏ7|஽% 8ϩRߞ|0;j$h]O! 
vɴfDHSfv% KkiLJ^)83×;C5 e;VP4bdyp`\ڭN5dVH@8‹GߵLُ:~;_] lLE(܄|K-$܀ ĖCK5GHDda5%MiL5XN1)I4P30,QRP#XYf8i`pOVXWM؍-,EH^%r]񺟇P:ۮz-ijw K[A=H92~V$ZPL&a=66NN^-׮ y.N۠`0QZ 1(8QP)ZƤ `הlץi4"z3"#قahZEPƳqeU%- X4Xut:}CdaWydF`ȹd""s 8S.g.'LkdԴ_*̤$"%Y)ɲHIE=`Rq)Q&G3e.%;,0cLIwP$OJ-|Q٤—fdCm7sm:|kY.eH"YeY@5 1wIK#S(&ibo80P5rOEE\,aڼʵ9uMMS׿}i&[t5(#?`p8ۨt:ASFayy6Z4#4JRkfջKNP $ xz؄< wO6r/Jp();Hn5)Bk2&I&v-8qz( ^AԀF @`A]P/X;}_q+[HT3:sibBc:>#9\Ιtn$s,1 IaehG(>",4&C@,ڙ~o3](c,#l10@2jR/%RU=G~DLsZNjBZZr:Lz* _6R\7nvwvAT1s1ޫ9EMړ33܈9 3v66wAL?Ir%IgCGC'KOqqOn#DO$FJL'Q:Gealȫ݃*M~ VR'pi%ƐJ%=C޳zRC!9OgR*(2> WN1h hE*2RLZa8i4z=r2FE# A>5^ T8 I iN1+5n{ ~s16۳Š#z0%KJi嘡_6ѱBFupP{PqL(p,LRDn"I`^Xw ]e)@$WEMDsG1H7P5AB8hNEUQJE뼹Z "ͺeG%|ըu3r[t}LJۥN1:Mcaj鯼x G?bf?(^mӋϻIAg'0,R+hvfWmy98g4ݥ\$\rTXS)PJm/V0(}|[њvS;G9O Y75 9~ F΃gZ"a2ߏAf Slm++K>Idݲjw['(>~UEd͙ DJq)ksPL wI*\5| +Q%VCc{O)% '59̥֌󱜩֖Wog;ST`I*4łzA$gJ 'pb-`k4ث@FsQ!| * є6(RJTZ> {y `y(@* Clhw;XdN(J?d@*H©s&@V#)H3@Ȗ X.XQКlGϗ+d,*d!ARB2Vv>ں& ,@Nr6sfWb  vyc5,#$^E_QE pu`v|Lbynnnw9;%f-rUvw6~{ A*uk}Y.Tr<)]^?vX%Z S9V!T0TࣈDGK\DbK텳V?zdۻ%PbO!HSE"<ytE)@†;4QA%bƎL'B1gHlm:'BHʕP- FQLvt Z"yĠ[dlf#>\`86t5KPH_*D[5 7XCJu:ah)[ݯ@)-+jzm`H0 Ҡf"հl-PZ?)zXĄ6q `pAi (pt6sMh0Sw;T+s$s#8yVψgN_-(6g;0WD, ~v,"X0QhkJ9S{$H W>S0 0unEb H+fv^0h\/ONDZwqM"BKLp̣81V+IN*fnCYd.$(1V$F*KsNHgANȖ)]{ |8я/3J9^ vrLِم-?l [ӭsK-aUW^&8tD!8K+ Vr ͜d*$IȤ@&=l&ϥysy=ѵ(ל5G<*ɟG*{#`\>~}kׇp~~ρfo-nwK7[/5ϭ11/2Mcd3 z:wkMf'BBaf]T~A VT$=+ˑ23;9>$ʿJRZs]Z͹r6YsDv|{cn/h(C:_Vr{`kv4Ĺb)C-|̤ho~_jtY/0H3G&1*҆2+K&þ[n뗤iiK}BO7/?] aw;ی89Ƕf C^?gY=}Z]-V,?'mFP.&? JJڕGgM"kсAE&,t*"sj'~XT1iȵ 8.pObS׋yL-6}Ք5L|~NyV'{xϪaφ|0rq?NIQ=ќMVV+4H+uv[L1LĈWjTҰ,(Lpy)4_ C6;:W* XY=H阮i݇q'k%<.%&BI:lQI֒dOFĒ)3f(l Hѹ^aQ5,G##^Uk8ƏMфU?b+|yx)t0=y MXqٶiUvMz7"70=/.r/C_,j~Vo$ahX!ȂbVMi** N^Aܥ  }\M ɕah'*Il1X "0"ќ■*4@ UNjlT[8',R&`EDBNK񚲞ml;)k A(~ؼf)@d7o@7EM#QM!ȡ`OdQK FQ/)Ҕf@s5s<E5}nαPXEiCy1(JCDaEq"0Z](6L%RY|u[#*WQuD-?k1nV)>rr͇MJ̘Ta6IFs i=ڡ}?=y~zI KbH΢]` ;Л=Uݴ+,;:jٷ~Y.V7&0wцY7_tHtX!3"OeE)iRp)Uܚ, 0 C`noB,wxCU)Z!E**_,rP&W͗kb`Ҋ7C(~=#ª"3 ]^B k4.zVb钎 DH,"-5lR8jlupݰ%uOo"NrDQ [Lr7kQb7?"=fz8Ǡ]qOK q*%wg/QrPJ*{䛞g/x1hXQջj0Wil4ii9\ F5\& Ȅc348&& u  %P#b`5fs,3J- P'*1a礋el{lxD3G>lM  Ƽl&!߃q1{c +$W}vK=gTEo"BREjcXƕUXq;)8֐HO.8vլ$,F'e<Ryʅu,9o|4sEXo-@F-+) 3%3ףD"FHh2; LÔlILjQh%$evkN-jE Ey/O[$΀4f9% W;QHy?IXV]i{nXѕDLNpVdt :Jk0G\C,Vt p%uȪ($DyW+:4PB= #Ԗm+-@NAe\8U,(d5ڂ:`WW1A:0~]_isig$ XcYZ3ߙ۪"e\\jE.TKg[>ѻկ_~G_"t\UX~e]($H] ѣnvv~Maƨ?Sj1' 26 8f6];4*(_;z /z)z%cLr".NUVq(tgO :j\UС8j;08Wۡ1$m 0핬ՍGb$MlZ${l-RkC}`7~s6ڦN[]R67 U,Gr IrcA-;җVeJ>Ur yRh?)Ldm;E7 7`qtZ캖N' q.k}a@]-xC4\kDE#/0iKO\Alfq LS'?_N߹]O`ճ{AO_֛n>k ;eHIחQ=ieaVAJGlp0Snz);W3H7;/"~s?mgJ|^Ӥwc88i @0@'N$U.:m5*:~ Mhȹ"#`_=q.ѹGۚddPk6<]emeecB&FˇCJ N㐊)&&s&dcC065yWd;~ٮsj7nxG0q&?ZX>4G\>> S$[3IZq]gfd,ir?4C6Ʀ*x9yɛOÏB9DA9*gBsMG; a0޵q$B9_ 9wOld/6c[Ғ'?դ%DJ{f8~48UUW׵l% +er˧r2sMX2G)mOwedɾkjUFF?Kt幎F3*5Br1G4P-x;0C{:3scH97$x-a;u14>̀//Χ|T:ePZ饓>xc/I{{LآFHms_3e¶$\˶nکfjIJc`|k~> _ }żlGWMY0xq MB`A>Ϧ kiM^JFq/(b92zjgj^H&SbVIYWtZv*1hzuҭ<~ݔ2J JG/n fM0pG7%<8VIHy2VҭNk[%Wy۬.lBg!@'͋[%J/p{J ~$SVW)SG0]-bx*iw^FIa)T2Uƈj"r်osLN>s q;YJ7Gۋ Rvʮ\m42DT&ݡhc0JwpUr#h4H\v,5Jc so6[^Lp&儛:!,刑W2vgx5X]&_;V+zW!ʙفFh<Ѐm(»7mQ9w QٙYcm6 E=ͯc ֡t\[$[,5LFJG]&_Ψ!"YׁI&ګ O2/iT@,¦*47YO$dZNQӌv4DRP{p;m3cx͕ƕQYC7n;Ae\T6[ .cFhlΉ`iD-~mqJ]w +[7t=i%Q6;-bF/bI5 }~rOi̩>]˫?:#l55O+ƪrtT?;%Oc}ZNޗķ{&fvŃ4vmɏxoݶ]bطk}[0xK4J!XexQ!"ԞL PI b۳9g_k/a=Y?Oy=OYlv, 5)9tJFoZ,,:|Xj>[P=g,./7I0@x8 A/6ZD٧,F9c+hDT" A kW'm|7~_7I:%7iCd(~a4p%UiD&m4ѻ3EZCyGo7bq'˷b:z 5|6ݯ٩`=*(=g kA8BE3Җa96z%ӎ@xKu@pAH%B*'ik݉{V~}Z w-{{ZemjODm BY0U5erSs1˺ɱZ . 
ռ#xv1].V q4V.jzbFvT۬}+0*ͦ7 Maݺ6e>y^#jkDX411V{^#jk$Xyػ=c5Rd<]՞hYTLBb .M> 魦\)z"3G|j5qF?hbhbLF熽ϭ11]xbGXXD4- /,ͪ!;9P%ʤPJCڂp <29P3FkVj퀟m2wqQ- Yl-Xe.$jQ;^09n`&N2f֦ +ZM}eRS #G)Ryʦ H$Z̜< C{ڹS "mC-8Ug]!nrDHתj >BnHꥤ$|v^RE!%ݔ2/QDuʨfJFnK$AѺL6%KL#sҊ㠤@%۔N-WxSoط>Mzo$c$/>L.nK89=yD|a.F6Kq)+wEҺ9mU"{!r[G%p%L]ƷTOӽPTqRtڲ( ]`=Rsw9+k:NªO6p1ϝ<ŧ GXC+X4zgB*.l='϶PLWMY0xq MB`A;COs`^q UXE70x)H-Nsk7lr~1y[j{'`$.QSwtf1|q5 8/ß|\ۧ|Q0XTM djS@a;nO&k'H! )؁I a*O NHL) )ŃrHr5 ^ \EC7tk7H.)8+ ,W\ .hMTmV%R4u{_t,UA$Rqc&[`%!g"A}(HZBzvЬ_ LHDŽ8R1U: LI6 )C,V;HԤX\Mj7F46zL3C6rVxLG THGkvCG*|_hY&١0?is6oS˫Y"5Q$C:,|jIVf?/jqoѢf|{q8b OoJ(x߷߿\3RTlXaٙ ~F`u2 /rc6*7VY+Y7qqF*jBJ/P; b:ӏS߿U IhT03l%HV9;LZ9k`U,a y>Exϯ ucvU}QG/$?Ԥu>~}^|QO9ËeR5?c{'^1cp4x /qՎa:Ryq}a`: J Z{WcsZ .oY[nAQnЗ!Ho:/޽9e5JsK SLZa/07Pqu?MXMga@Cl75 J rNqbA h1ը 4 ]Pv,dD цI*5RГkBtq ,ш5m,(,11kB,̣ x?hBFq\RL"C0I.2JoBvk0,;Ft&rK9d]y$04G8b۩% +g, c҉drc$" DEeRŔXA6R4=B+AU`wXx5teS> Hdҝ/mw`D@}w<}f35Ws;#}0[6R&yC`d~w'̧2~7okys^\VnMsx:.Т: ^>vp<ΕpqҢ!WιLs@us9g\ѷR`!*\F{n盝ӷ\qEHTpX?qp*?{_Oc}WRl[98cu{eJи5U/moMzڐx`P?. qz+6\ '4 ŷN// в'mGjoiV?teB>,^ԁH>I8{ *[mQ .F[֙\)i!B'[V#yDJS^CD` B3xLyD)rɜt[]Af[]g. NꯀuxB\)m]1#s=oUK}n\:ilT{1gWb**bP\v& 5)|BbZG\rJn' ʅb&9YX"]IoLXLܞK|VG;g~™E8J;vOn:KJ[blIP4wFYt

dR&E%&^|6}0)xB}^,Z:7 NN@)e#eqRIgoϟpöJ{\.NM̗+ҼBfʔyFDjC=5zvooDH K9(rR=pe@/V1y!14Z.NH2 悒Se$hdLU;R\OTgZ|ŏq=RwUy:(0LU\lPF~"i\֟bP8WרP^KކE\Ye޹Qḱ,]Dع^]O)tzuSz#F{>=C{ޙ;ʂ7/2 1]OR #]m,*nKI_ߕ-jx]0~䴚e'[@/&A{ZpiYLϟ(D7:Av]CEt7!%Y@MY@vlkyFڽ]$d h+'QD|9%2*E'dixo6k$HnA<(s*@ys5-,;lIG?\%E j3.ŠcVgj(<߭[.ק 34~[[lTN&;l>lI%z׬gİ{G4"a'K$J`J[u%S88F&. *qܒ #a8P$H,Sukz(n3: !<_y}3vje:t`Qf`poCҌCB*nzM V#U%Le"=!RnF@uf傄[UA%CCKmJ/[9oyܺ$&t~rɤ&Cozq |ȝAΝA5wvހ/m Ra(!E_ψfJOЖ;:GI_A@BoNu>":8F?3;UZ:CZ1 ]ӔbRiꩁ;țyy3 oT7V9D)8 T*I*)™ .HejZ×Сj[*ùM rkm5&Ԉew8Zk| =\sg 4#H$*T g\/}[3.9o }ՠjZ)K)H"^3iY)8h͔ ,O YjP>| &M>G8wZ 5<^],H9$e4"lkN"I"-{$ lȳpɡ^߿ytPS:OwҀߗaL;^5ȋ^5.zU xY(2LjQфmq$5RqY;>k]RC @UQH DN"8 hh+&zīmC'瞏ф(-h-&4-ݨ].Uj-nYYԣgT#=Fs]4ui-vE1Vɚ{ H\.ihYgT GsޘDR/e 1'K @mC7v Q gAE:4xQADZMDҚ nY ."Phg2cvu7;NU .upI/GƑ>?[ .rƑ;a70Jɢo.Z=GLon8a'7t'f[>/iW]F,-nX@d:>sA\<'Yra M;ǿ( =N86nN7R`əYjM2?Hl=aKխaKRP~Rsτ+m6]Ѧ(~`6V#`h# dZGf^  ˸T=%aBI~_|bM 8NJ +cn(a9&MM RPUn]hE͏чdx|ȳ)`T޷LqN.%Ӝi0%0&COd*/[vU n2kГbE' id"d("h"s4HI'mlMazo:GeҌUSGԭnc[C5U#Ӊ e K;NP,Cȗj: * ZQP C>!zϩ.>̓"%V`Tn=2]a3aP':[Z)T0<IpX 9A'n1TNJVnjf~ 9de%9ӱy VF@ 5D_,k6 wmnv0; rƹDgVZx[7o˜z 8}ɩ$_шd:ax?r ga0K..ZM:|pJM5K5K)T)ڠf Քw2` z _x!r>{Mv ;'A eLr9..j%LvEFM[*<t5m{ط,.N]7,oX}f;.렖NџgHBAKWϟ&mD1zŝ?}R (wR6:C;:5o~Mw"~$ɻ0H0Nwt|g <0LCo~'ixBW B1F`x`41*]bX:g}1'/3A`Y-ʷu&Mm>ƛ%y61ma}N/,>lf?.cTx&djp59-1 åNX/ME=Y+G+ 4y>jYSK\ԊCi[hlᅶ8(Ł5RIЭΪv G_YUSP^}HeѼZB Q!"PXc/XM`{3I6$NR=t4!MŮP>6ș8| )_[ VNsL$2&IjmLE@n"W{"ӡ{El᜼3Yyg&/, ;+1&1nj3~̨TnQIc- !O3"M5hZ'ne -WbCSYmjUtv-ٚPhzɛ7ƈ,!} v=Ѣ|%SO(An+({4wԔ .-]MB s'eP}sjNIEY =7A ׄWo DZi̖^'1*<œM.Wu9uG5 T/R}.ʈH(Q3-h<`!e*l+\2 拱Lj@4#; .UZ4Ir8shtk6.Ƞ^X+kINt۝B疑dپ$q:D"J 5 Q7Ÿ]@(|F GfBw;\Pӓ KZ\vv'2<rï#v+9G i|KO5YN稓Rb1??>ϡ/5rT٥1j0BxΰB0DRQpC5e[yg! AR9?7T4w ?B4g0gr)t:y HgMiaȓ-Nnq# vB i!ţs[˙~TQ`6`=J@ߛ/2& P@ ꌲY& iEB']P`*|8spsNvװs~A{ 诃O$_*gm [Lnvj;xRޣwxп% ,9AkK 7>-$1@Գj/+^n}fjY_#κD겤,csmz3ZG(Si4TUTma94kGg\# |C]ޟyɿ/?eE֔ 0RfUǸ^L#(fHiog)ILIp|"*y.H z`9e-ܻmb, B bے0B2Ntڒ׏%bgs7#~ A)t20 YD͉7-QYAx%IV3a7d:&KO!y ?~q-ZXD*aBE/'!>')SGɟꏊRMmΉzQTzvC쓂ZV[6d%HJpe$Ǿt6&['篴?d!~Cڠ@g<^^b-L&+W3nhs ; ,H̃(C,ysAʚz Hm["|MǦ/[#Wv[r"M S!_pm19 Qp6 e2Eܵ25_Au}RpJk0W21BF1hp\K4rlH*2C.}@k6BA $cڴ5 /ӻZւ6>z*ђѠVKQf^KK)2ș3Yղ&I*ju;E!^?&A9{.čȊYsV{?}}xVQ91ںghwBu:jq eRX@a'-{,qEs5^Ëj닢U/vҗ斢G9^DB Д)*3=d}ot|BiJ4xiZPƎtb40 Fwɒnvjvoc6_ߏw?Nz`6_Vh~/9Ųp dX/W7 װzD.Wvy> RlusGFx@":"U#A$ `B4*TF_9p~2wvR02=zįXUvߓᬅ{L5x14 m۠)75duK&b!PImV FUrH$4%cK *1UL6C$GC>gZ{LZI!_w䰭e!C:$ y=Fi}0 e> #X#UKmRLjnPRb"9lv`Lŗh[|jl($0;AЛ lA #nރw+hR,ys Kʏ^H>vGocg˚?]?_  \f_or8i7_go~)M7쯃lA$)M.AwCJUy e*W_$he\_[7H<|ʦ>M-ת5GGݴY5Ç!ħ]7´\ܛ/t{\1 7~7T -in_)jģ5tm#Z.DE/ ^qAErHjy0mx`Z2N@r "楢]o&Խ_!j&xB␥$HlJk#DUL(SF k<*Y9l.:C  3P%;s̼"7sf2.4U'pC53JPYoXH;%`{ftZpE۰Mٞ*Ե OD=s%nP d[AMƂ!QB2\d%|3ho:WN5q9$Q_i%8K􍓓1BC\S0I6kCJvauϱZޯuy)%ous/dcmRKB8\h7LnVTyr)&(T+0Hin]_3#%fQ;m|28wݎtڡK+Tul;}!8߆C1ug:NPrpl@}?VWy3ZgSԄ{ՎυJz+p{_-x.?c7hm"C?_t?6nGa9V/a1 GS67* 4k Wv/om7ː㻎avGjWJķAA{EE} Uu]ܪdsEBB([\2&9Ϋo}bSgݽh8T>_xqQeћ{<+|jj򯕢Fgʚ8_AȬC܇h^v&ww6U"mdd5q  EP":2r8~ (+9{f^s~io ix'^ob?3{k-Vm9ڤ<^JMU "&洺 MssbeRtuNe:.6y%Hb*Jx=6#֋.|k@@4`4 MUoCsmuln `6P7 {9 Eqx8PWzlXCie7 dF3'5nVjob1 "8V{ lD,y "F⠡U27%Eb*X섹9pxg#V V%HD#v# bĦ"1uJvP.%O;JUj:Fb5prsϼXMug౒ n*IG$6 ۿ/ooj˸r5w]8KCƬ>GB~1:\M(ɂyBho{2?V0$E!Ck.aqAlȖooIږ:䝷ȖYoaKj_b @K%oaKֿSY&y歁έ}uiuZ4v ˋ"vQCŲG.oL֮NibBI@R2 ),94kIګw{+7^I#1Y+$n6X+·B6csf1~(A<@*i=+:Rr҂鸪yԷqt1!9dB <bĝV=P۸HJ*05~KfE<0PDW@,5ȃfsItZ<}^@[cXmIj$ ˯jM?[PAWLvR2eB@(qd7Y)r m r./haL$LaNvRil9KAi 8#J\hr_\+݈47s[|[l"T:k8C 3tX8GWH'9Qǜ%&S1o sWN^R_F$J|;֖[–.S}nj&t<'OY'ѽE9̓6DUNJj:,>CI^[WHPa倅} NzՏWe|i~UƗCe|yk$AWq}H^~zp_F 9W5%ZA-x1lm[?}s|=˽m_q뛷f+X|]1+""<߄/.w/[7wJjzo4T)|c~RIƘdK^4tYFnat{ۆ.Ntb{oOjC|9pݻį9W2/9ɸ{ζ XI 鶠>tAU憎CU;U_\־DMy+k*J5VִZU8_'P)I-88[kL)WE' 
"@Ѷˌa24QӎYṈy>Jڵs|{PHiŇjH:t> :Ϛ:(٣SRYkQ׼yK"))j>E @< W҃t1Da0 R\\%gΒ4!%WV{oBrP9\a Ftv{YhJ;Ն1OCրLj7PDl6:19r|4ޢbEXNOth83é)PF37bi&nE:*%öO|1AHo4j:9[h&뵢&8}vȾnckhl&[>!JaikEjC[YbFa+;[Ţ%kI~4FQڽ0k?' 1nArj[ҁǂE/A,BfȂeJEjɝ۱T锡Ğb؀`,?f -:fG} уjЗΡ3΀9ie@,:KVNhdcï 'Cuf K3.ݬی;d$jK.q8+ΉE6oZ'I +E;Z>fdzn8Xiisܯ Qzr#6F~G Т`Xx*=Ԫ4>:'黑p5.Jío~SNVTrd>dyMh\θ9QJ 5\1=N@'xQ2-t+.\uai ?/1=!/c3h8SrtlSvCQҏtM18X)4wU!mqb!,V?.`Bƻ|!g$D~.< "T *1RP*GUåfU_}ϋIȤXLH^ 7h2Ion';wȦ@LJo)\JMm&~?OO'ZhBխՑ9ά!DRfA-fe4#D;$̪6f,8;hMߧdp;;f*6b^Qg[ƇAtGpJEy|3Ȍ6^1ͥY}Sr KTIkj3pf=&|P.AqȎ{ޛS8w`iB^_)v: Q%Z_b;6'&ӜodߵW<^,zų^j|~sA[tQ  . `M *28]J#RB؟QiDo/6s&m'z70Oh䳘j62-9'PzrS%hJJ/ײ&%Y{Eel ѳrt/.zҖk==ʌʽ[L24n*ftwOm b[=jx_>f5ܮ+NU)tlVg Iq Z.dw{zq*91[F|o]IF۫NSd==^Y:f{Ζ(e\rmZ Ѣ"E575̲eE{"jh3"BLў< ]M[ۯ0Ix8Ks o5 CQ&7G ٳo"5y5:KI=B;ƵNJ aWȷLYLY=ȧm\g`e!( }i+yVz㠔Gzk;mQk{z|4熕 F:W@9`zc,Uj> )7^As>[[0FiRK`-@N50IGr֗9r#Fzbh0sSC0'g_JZ)-`@ƍ#v֫GFјJ)V {U-lJwfG mڶ;6$@MUvNzM`m=b iИMd#u9>2c&i ӭ<m8jXq,Gb1 0rdiҢ"_^2WE^rv-'?Y mm_u.婽O7}=Y2 I~ۏQ1Ívc?v9XOXhs!j"xv~;*/{eO\ysBP[_WXSmKϲu-O=v/ +0CKO{Q\O1 .vvT0 'a O+qcflݕY(W: q U 'Z,1RIg% as0Z _1*%۹tPQczgjȂ&(TQ6}#pY<^&&3: nyBw\n ԯ.t&T/GA/χZEga2cߘwŅS{=|zyd)CSv6Φ`+ g4xsZ[\ncν s*τ-K>c^ZD-F˰ tbƶn]~qڑ)q o \2 LSP.8$񶐄{PR c9r sF-^2}(6=?gF71V IB5-)AUt9$sp B$iKš";95ypu%X`/fqu>y40ޖG/8H7!Yf':?B|;r0\HBH'!=J)A @ղ4;Vဈ_k~fadJ\@ϖ/tGW%3VˣoI[0|,/˦;x2H)ؑqiX$H`yd,>;<;ow0I>$X$kRC[{`,9.xN]f'۟xGTZfTL4ҭ4CLSbKet<J麬-NOgU(g[Xx"RhD+Rt1f$!ɓk,!ZΩP G8ޅtK_S f0Cmnx6t0w׋MC[|ξ~eS(Rݜդ҄JJG7=PLECːU)3ژX~#X_<,}0KrE{$2&On+~ ~?Íbz;2z9BDzSw<+ϧQg&%ViE)Z1x~wpdAN3*wR. 6\ $%AƤ?O.EdLB˪I=')>A!]. lEN w #f؅HT1ޛk&C ;e v撿|11i"{;EK٨JaM׶'w<`O1'\ʣzUGCz]PBYeNI P0<nfo&xs ݵLO<|WOKw,{zwS?/?@k88%y{v,w,I]?Bbe5j=c~QrB% y*N w+<^R /a̼"ko۫AՃ7n|:3gW/gQջ@g\ u)! 9X DKP9GyƆ(XB -5>,8X/<`Z.S}&O#Я?O2!806֑D)Ĵ[*0xB0A$2Qnw@N [0Eؔ9T]ΕU sR@@rֆPđ'6o#֣P5H~%i@pl%A *NJIT\ [D_kY{RF(Zemu( b]Z VNkTjJR[~(LbKmdZ rD(Ǖ*% .Wט0q"`%֒P K6q( Jm=F6")hm@qQ =Ƥ޳3fN rVpT]a O뚆3xZ0T51y9J"&A1R6`R҆8<{D 1*,Sm}A2G0ekoFْ1+D(դa~Tg:?|.Pqgr0A%&};M#\Sy ug~aq*Fû2jD${4MHw Mp|w w 7 D [|ѫGkvM8 P8V )gk TyxB]x_Osy}#Uү}// ȑE2cbeC`ֶbPU]Kשw8l# =PZKJZ*.FMF}WL}txʛ}"p+RDXM+{ cy҈#W&Aާ0[ EIeJcLkPG*R(l釛!@Z)J@'?:lm.}DVJ~J>xr ˎמ\lɮ;X㴯')I[.nxrs1V-svr%;.zrR[*Em=K ~dX<)xD~ ,JCM!,+>BX-4tB$\װ/֨z"N :3+F*ǡ17Rd-K"/\}WT5_܃hb7` /A^;O3. )rX2Q-l} H3q=E-V*AͅTnpnƖ^᧾`u!Va/+_;v/釈olqJTy1>{KUlyֳ5aq}l@k) δ. 5V#}!KdcVA9WA'V\<#9X=WN(U(^pkL%-6#46),.(5" Vh?FD ;A0a yh`İZBf 'dJ'ߛyV.._i&.}*Lzz< R=4}NIXHSP ڲ{ a|HYu~?~alXǻlж% =9+2/Yb'#BZs~` bmkxU X9@I!K` ʲmeK Sdg@cFg7^T?g$V*f<۟L`n9 gyhמ|PwaL Sղ?IHt7nq>/?P??:[ \vDh/Hu Ð4>0!Xm ·lA稩\x+ J& ɘ)te$PY7.Q4wutr%iOC;'Þ IgNcC#N8B=IטPNxSgz8_!'K~97#d_*3('A!{43$%1TU]8FzL๝k$lEJYy$I4ϫYMazB 3y @4FO ;t+5r\Zd}Q/Iʔf1;i?D!h8`>UtB0FX+ʵŜqQ@ ,xbڣ6z{n_Ыq,v5l柭0Ibv-3a#$FV!b+|4GpzmE "J$T:шjX PZoԚA zZb31#Qd5J+ ccmRk;FMІ3aX60IpQݔGJJ|3PX+MA))(ST3,gIn0Py)( ]OzW-RP|RcV'NN`(K(4&#Ljδp6ܢ=:Ӱوxԣii۾ l&9, SjDfqBO>~[eI<~i{9LN #YHL8p ^z7;9#+{I8ixRf a="$]8!^&$&VX"Z8rFa:j `"A0BXf%WZ #y L hmej4& ]hCp=L1Qe R) 2L9'!rZ|[ZdW 1%F)e5Hic@bbH0 ^ -`TmwLfZiY~mͼݪ՟ۻ^T'`:.ǚT.6ڼZx<,~խ/=jGҮUS7)oRSIMj 1X}Դ)$P p@lW8F͆?vO>*XnOL\O<ӎ-h֕Z6R-atgé8I'+Z^O_z1$A^IR/R/M7?x8bzYdl'Qg?[FZ1C0Set蕲iŒJ@뤋)J553UG+-I$}aC=M)5tʫd_I >7Y+|W6e kԟ:B2+[,)"^FK3z2IGsdx1_ \(Oy.~< ?d6} #b?u˚-ȔѿZ\#;yr3pbof׭(ӪrUJ Vq |q[&fm@Sa6YhDbVKw[zc?ޟ]X Q\σBycÛgG㽜߂|TbQ1HbvƋ&U\igCf*PJe:C[ޱyVkXeRj߶ `u8o x( T١؀MP1akh i B?r*cݜju'~=vYƮI5܍d0z.Mˁ Ջ_qI1w-+#?3 2wi``Y\f Ou.v>G=-{oHi$(}{%Lɾla|>֞05r0~(<7#tp-9cngKG"{>S=1ȲxO+>Ok|a]8ȼ=ɋX,1pNH+@++'gЬ/[B,d_NnЈ|z bͱPa^dhCIǂ1ĨU3|/)= Ú-xzi9 ڢ?Ƽ_c zR_Ew-{j5͙0ڐ-*:haӋ; *.sTHGQI! 
wcx9_-jSOy<aĔfyTg2X$/F,]{]g#ɤ(6ee 0ۀ#tyh,rQZxeԕvzJ/# zk[;4 =Y++"{v .%u-p=9JJ6x 5=w"?t(J=3֮$Y+ ŏ)"\\m\BZ>m}祫rN pk+JZ`!m&{TNXTWC8BtJSᛆ]&{]np,u91Ec74oYٗQV6Ŕ;gOFZ*OeޮiڏdF_ y ?_/ICyϾᓹ]JIߠ̑lAP$_`~>(&VcP U|z-ۄlMQ(U0Ѳ/+2E#o5#Yߒb)hB"%[Q8||9%61'w5#).etyA 8b922mp=ʘ Y9<ߐ$u2=T’8N}6hG!|^ֶJc![?`VWuhtJFSޡ<"ϊp9l3DFG]Ąx>Mdư2ߕjQ_ ZA z T7)w1Q6E*ؼc^afH,X&+կ@yx")EfOAW7iW&iuP`&dnITbgʹOxwdp43Ld8W۾޹X'ـMte#Eh:g^.|f*vhZG@(b7 {Q{AP)9V)"xR-Hb!*NkƑzNTH1D#&hK1Sk2Dr 4AsLD%97ϙC)@Z6rnLiHpGuάvT4 LuIs_(<2 Nꑨ%pH0KBrw*mCwd#`uͶŚDH{8l?&#D5L9TnH7vK`ZHJ5-!Z-AWR" :FrΉ<ݤ(Qy!%n"e Dr':uXP$h4J DX1AcPz0ޓ,W d> C%$OI^]h>ϧ1HJ^a,Q3uu]]v*Ԋ,k6>5@m]2g2Zr#AKaB2K.k+# PAÎy̦|ΑI6XJiOszxv<f־At\FMISDr3hr3.9H9z%XF3 ,1H \hC32GȜ Y, a9 "&cf|̀Í@<矟q.z>^wz'Ë'%Pzk(>^#*ǃGeō鏒J__-tLє rgEPXÒ^YPQĔSA)[toزZݻ휐)D{ }HkүFi¤4A\`58Xv^|}3xaB06Tcb_U5"[Srxy h5:ruN*B ۳ZDE}QMtא^UwӲW[ԃ)5ȵh;B`Aي7 m%3 5\(9edFB. .]p61JH,DА q o1+~qJJqAf]ڣ1T(Ã!6)'[7*tL|4U{]=iV ׳ʠT׳GŹh1}58Z̺zR5ӵ,6#"1L?yZETYneaA5\ D oLR(((S@) k/5^/&-NUcqSQ3O]K)B:{] mh< Sn2lznvvyS 0` m[L{ܾb];dB]7 $coXfo 5yWe ,d AVBzejCxa+K+#mA^A}(%>\_Id?_I!&п׳HJq5?/-XI-%2,EVfzλY%wqQ,i/+_q;ͼHF^nk6kKיicBbG'Pr&4Ϣ󰟛EU TU*%7dZK 04V~%Ď߭Y6^b`.tr)E9t 9ED2!wё/qљ`>uS\z1{>yǃ;%gԑ,#V_' O󫏇tfX<\ wsԝO,E /N 딿v,]_ƿj!-y r_ݹ.L^KX渴ZaNt|̱>l!'uk| |i13`M /Pm pvύԀjpHE"a@Zne^u\]񍮘'cP, NA>)ǙeN" J2typ:rv!!8Xn}G #:;+i gF0-nY8kƶ$@_i$ ]4B(Rn]5PT^ I{k\N@V6f@G#="`!d+Q(Q?+KT)ٓ OЀ*qeFoHf9j2{c8VyΧ &r.3, :ʏJrfb5\:$8I>:ΔMy~YEH[I^Eۍ[7 Phol$5iأŔĽkb~4yc MVpPt EW%-!:r AuE얏+j$e/|ח|NM}P7O񔙎,Y{;k- @OԴ~e[l[.)݋zv3wSNZwûw94𝟽w/pG?~ny)Y f18*r Ǡ?&"e7>gQK>핐V")(+SG:)4/jAgVqF*iW5ڳJ}@\ɂcI,K*˄KBfRǕ+͡6LyH*OH-*I-hnVQq)BmNH+aqO:kV^N}u9Yؚ]S1\{1?ugz{"lQ篷?[SV) Ld&OD”(G#:m+"=d0H;h .?s"?ˉhꉨ0l4!$d@tpH# YOTEa$'f %{qtZ-z5q2z.2S {րBڝKϣP (Z j::>h&-pk@p3@ |zRrJAsM*| TKa L&Ui6z*;InL٨ yϛPPdl8j ׹ =$5$!^8fHoI",Bv{(5OJmx{+{kxSZ쏽_/8QXrKqD')emx{s@⩃f&Y Q#xpz&X; ]g ™n;<؃1/0xJJ 8n$":qv~܂z8pRw]s#k 1JxsnN 4S#_Z`ԫ8~l[ "PPYq7B^{8;N;ZͦSvz(X:NݞHcKLwnۣFb#{'BFxF7c@(v~Owd TQ﹭Ir[ӗieO//N n2xoOYr %nr8WV L}&n !ͻh0x ?{WV ^z1:A  WlK5ԡEP> fƒkں2-g[ S*0ѳ]0M&O{.-`Weʺ$"0!!1u9dzFEXO::& :#om>t1uGhiyy]I3M6?Z?1Ʃ(Q$k̹\]-)_],W\kzn7=wRƁkdeӦ2~?B/ՇwGˤmik7zmHe=@mHwYΓrVMÐGJ٢oQn+L+e<7jmMiJY [kip#}pXt9RؕWtA^]NμiNq@7UxaV5d)_Qr"kNਐQFȼ1p=Ɲϕ(e ;uQY1O$,IЁ`S"YC~v ZvPx: "f"<&x7PD¦ٝ:xH.* `:jdDJzӿ][..$:!,|Wy@.BN^J#z%QNQʢuZj{-``XsjC÷^hSDl܌g=bft oC9|W;|g];7ˎF0FЮ>SS sI1^>_i cOpQarͲ쨖%Zs=*O6&W%|svA9o2 nfC`US@%T8\p/]ߤ9}1`d9c8 x݄ٵ9j}p%ȅ b%s(ƽNx]ccIZyw^d4'8`7㳛>9:>%%<$F2arv q9[ݛA="3mkݜpp%j)&z<ܤI,|pQzڠn½g#],K4f<`Y<30X]p5Sy/8IPvqxޮI[EI+D839^1N';qk49kT0**H!!9ɞX㻰o[.A,Ґ9zt>|#|;\W;eNAImߠzErѼ{>RNXy2K݉kx< tLvGɈ 瓖*'U *M,1$MU'lv=l~+aD.6(ǫpF섺/Ȍ[rUMYh˨p0JLb9T1 yN;ocq0BDjǜl^O%DG{E}pץ=j0컏TgTO#o9[\hxqJ):@'#5VmJuYx$z_:lLszYs-1g hN?vsZZZ0UK9y #T:V%JZQr.Wkges7~Ov Y'4>s #5K7X:==KmmJ$'"5*hM0|FΆwXr0Gĕ-_x뗆LM4^>2\V1!2y63퍑Gî0k"-9ܽrX"[E?r~&#BMF9p4Vܟ^LvFn0sb/A5BmZaa'V9<|ASm?]UA%r.)h :֪֌3VN`Z [CgLݵB%Tpgp'Q70$je:Ρx_o"xf!rd@-kl<@He;HF<{qjvu>1pzO`YNQ){7/I] -~ӆS96lx"i\4nJM~]N lDt6r 8(L'Kٽ/~s-x@4ECS3av;M}ONUvjyڝ5~j~AN6\O,^!: 'aum84pV^4g=ӯg8sYeʹ5 b曼MZE^ rv9T OJ*$<'a3JXl:I̞Xcÿ@ vKy*{vZٯffWT>Ny$IrWXj~;o5s;.; ,P!As?[`;I. o=Pǣj|ǭgzzKo筁ҥWqRN:t-=pE#pgldH?tz5r{3R xgx_Ɇ QkiT eځTA;vipHzwc4j퇹\Y8:2)14 ZإsWrBs^j^H@c:^2p4uR;z|?]edx7u6+|:}c~Ǐ֤fƚ{{}Y|E8y~xHϧ( Zڜ x@7zL@g]2Yb΁!%sV< PM &t$xTڠemv9j4_3]/+|}DA>󦣼^ })5V'rurL,KӵAt؈(\խT*$z U\aA~X7Hs EfdA)5]I#pTi1Z$8T*7nW>(>`>D++z@3l6#)_//Ap:9L0'2PPznAAՆw{Kg2 77l-[M/ɚ{7 }|b7+O9tQQQQ0X3Qqk\'ȏu !ItU.F!*i`6N:p1!" 
o(_QܵFtC>2x+h*?)bpXC:Z#PhF<:6u`КQ1hceM$5XY;۩cdAeGV\'<đW8CV &3R0˜ԁ LN[ FjőS-<@) Z9/S ;OQIO>Q2:^iᖠrB6"_y]xXE+Ou493FeHt qܭ¾ wgM5qxY;pFGShqO}woS8ǔ.;֕sxH lnX6,!3.%-2)t3`pWT$/.KEA낼#W|2BL* N(Y#Pj[E Sq5 l9sef0ӦL]DD J讔I5CGX}" \ e!Kϋ1k #too^P"+$6*Wtp.(de= t7)awd&K7t!͓R19_n{c fh&.t3*+WYЇ5 n'T/u d.W \/h /‹_.&+2%E0d$@+%))}D-&*3"ҙ.4.>; ً/񞕡$ȋl$@b ]7RlUyN>`D5ntǛcNI .X ɀcBcu ]u,]ϤvwS:܁1G_ּ_ּO@r)zmY6q鿢.yQ]$ $إWXHXؘf@v{e o[Msrp;2f[TTz~9 ]d.6PtIZ72+ZvGhix)UyDb_ 1\={qy!pU,h< q}6~޸ˋUn'l?O2y7zq.9ŀ˸BL*eeRe)>e3xڴytWS%z=ɲV09+,L!!mCf]?Ry_xUߝG&A?EN\}i_Y^ Ʌ}xͿL<)|9zg۝={qszx ;oi  ߽ 4jw}< ҫ>On]5"|yvue[Y-R{\ԴvmN2Km+hOjy2dfW8qZ`j5N}ۭz{6LC龄!>mP{<*ݱytnvI-s r~X͋pH{pz tҭ\r~~öK-_ 1_UB||~7z/0O9|( vxjw~Ӵ^L)p`9RAn|Bf?lN\2g6[}^]m{X{oSo_zm^y`,5_ V~FN}4҉_I=`*n:9ڟoߺr0#oX:{::gp]l%}v݅nܷͣ>pv?h?>'XY?_ҧ 5C_eg!P7*5npH:z<~*_tҝ Paf;S-ip@J3-Ϙ .3LsN vi3m^;=8/X-_6pJ8D\r'T~z&$lzggp4t56zZ6㻍8'{nV_m|-cq2O ‰櫉,R| RJѵ@ @*P:(1Җё! (DJ0$J M TxNPk*PA *TPayPa0LE9m< dxc闸 za`6J^sײ]|`LЕd;R"=!|mad5 oh4 G-QB TOR4g)`.$ Na^tv0c, 7qG?@ƹ+{hs⪺$WqV\݃zQol1bpЍZ0}JG9tv-yCer0x7[mhw~d[yƜ.Gr45v `BLzr7O+J(DM8!;Z %C$&iNZw@7/ vN²rT쁁>LTVL꣠2؃dpFX,[dEtQB aiq7B""ЊLI+1ag S`)S!HɘXVeWD> g[Tzw%8ͳ6YUu}^CE¬*fe%WG(=W B @Χ1PZ[)jNJQ1іX#==5RO%=8Wb٥UT?6[ifwU5H\p>[+:UH~O;xHg9Vnuq83fQwEazp VNJV.1٤^\9Ub)@rm҃sr̞oEOI譥ZmQAG%-Ue!es.RvjB1*={h4[JnV$7Fu!ZvN!PXt 5Sʴ]Hu.yR9×~ϯ<{'tǭP3\Ӡ<I`MLLJ ,Z¡Uࢗm 4D+MEYqQ>'.?h{QqQV\eEYqQ.r?J)c2#_1pY1km%LL8Nm6e+ry=mfFe/ qEpZ`PZmi:,7̡&0DZ׺hǛ%Eדv+̣]p~[J)ӉMR&BDEҾpϑT F1xjWD16Φ2ZV%*>TŇS| ?V*>TŇPZ^|hJĢ yԒ<B\q"4U;I!Z;7[9B Zl;_F590M>:Xnk d'W CHm.zI2`sjR ɽ5^J[Kl\ WK^.gF(SXj-8RՁ-$@uog{q@پfn[^ `BG"H( EDbۨ i!R EoEr}U9N>X\_r}Uo)df`+2g\qJpJm7ei ƘyM)R,U釫W^gJ.Xڂq 6{5Tp%FjgxǎyV^2 LJcڮH6s6;;mm+h[A .N'3_J2˾>Og"[g)A2m-,_TyNO*)s@Q )GcyG?doy =u[ZEEgct\H;CX{Am*/%j-J$S! <+OB>O|*<3ޫ똝?(X1^ J>VW"mh| xY| =Y(ub8PAi1R=@Esan鏧J!qFa:Dp79#$ (0ʈO*IڕH TT8U3U TT*P@ńw2pPB 3_7sa5 2bpp@\~#ṰSD `@JjJ{T+߂Nc64H<Sɹ TVF-U9# g(Dyt2@Nj+P@tꫭ@mj+P[ .N;(3b6VYS홸Q0122c4`gq<#$ *u9VzF'53ב#L<&N<܄8% (G}hC֫|9BN562:&2%w%PޝqhAY=BhmhDT!k7;GۯFn !0\)m? b 5?>al{}iG*}=֛jC3 #*%ߘ?QN|zr])JFRw4t4W>hIH족q}nNZw̱7/ $)I={ܹXB9 A_vbE`7F 6G^#فF]CxFVa9dCz4SFR4U[QA _puŠ?.7>]{EarsH'*Hi\[T;('[z2:>h:>|:D[Ab~yXAB:pqT0T{ ob`0ԜjFD%_ŷhcVc'  3$y$_77"n~}~٤n~ڱg_t|w>ױr}w^m>\'L"%÷Mx_8ozBjtqM/@pmg:S5)!P[A3,bBmeȿ;=9Bi_1}u^nUФd ,$fJk)Y&(]=WO\'ڟ)%b>.Y;muKIT;2hdϾxw'凛>D/1a_˻gg<Щ BwT W1~ҿu޼uJܠpE':h?~SwM5Y0Xoy+.sf]HOO.o/^\~n#P 1imMڜMVFV#Ь*Z HSd/O͋*-Pe_}ػqf(#̔JЫN)  ^{t&)I:G &IvStTU'dXdyf4Y1mbrP%%|Lܖ01AR@50b!.r AجS!V;D^nTm?]]{ sco!x(=3' y.6 \L4/Tb̦LyRjPpUyg3VLPj BF|ArR 9Ś׀F=h3,srfܴiJ g$*QGۜE[;j)LiIWcrkPk Kܨ܃6˛exL3hʮ_>i hF^T?h9&y?G,2h4"vyxr CmAh޻c:Ǡsa98IuE@..?8]JpއІw(1˾#g )msŶ2s [J!+o 9+u}.0&sκ֬妢ic&(( oT}VV`8UY,=w/7Z>pfXz /}t Y2 S#AT_^29A!E+R|w;6h A,kb L_|8+|V,뫟ήIĢ.NLi=eeN$ ]m~ <ɭqJ$*Q~ lv½ 9A9޴28c)%#49V@(LbB̞ #s _sҭ"Q!JI9W p؃9-r`}2-m^Vu4!ȁfҧ5TdO h=YIP H<' :ñOsV]{ YaYK1۔3&bBV(}mtV мf&w*|)Ul`VW.{]Nbv8-:yx1f{zE=N Y BǷ R `E ;0μ aoy„h# F"9<5 cBwZ P&Od4D`/Rem]N^"L\D @W2x.`7)~H9~".'{qeۆK~R`i.+06=$]z.VjBP !q# RdL,ŤVuR1ϫ!5Ymg&'y|w]/h96P7p+q[ǻ8I . HcR)u RTѪ"!' ʵ3 vb>i"8w![3mDyc/փi%ZiAVd,">>0'iVk=;ݻwdPQn2REu}G*LA&}+92 c2? b獞0u`U!νcnBGKA8({w ʀ%$.R5$G?O1$~b(Hrv ?Hց.|wؠYQ 6ahOG_7w J:fnJY(RWs`1QYp޽ + )/U%@ DòF樴ǂ%ATϸc<*[JH~L){5  5(~˕3ǁٯ&S}zvAp8vkqy{͵uǏ^R׋mrB0^fE Rrd)R$j/J+rn4e]N-}^dWЙ`nyx߬=2zczj7lf]-P+0j K(,޹*{!&v_(C:[[K"gֹcRwOXdZv~/> oqwן9/[=}pk'oECz.bIMҩp|)$aZC$i$T"U˪޹ږMn~V<L=;\~_gc֮ AyՀi_DDM`#Kϯ bn/˂fnVO "ǘfc!'xBI!ym%)W~qq?CvA IKvzM} !]~|_Y^vsj*?,TkgNrTc1Hq߷ARj.$-Y>hS#R$&`[] f*Ŗ‰=.zJCJ@u,Fڒ >mH=za2dKaiZTYgCR`8|m~HHU ʶs@BBo4@ 2X cJZ|opd OErmP lL(z[瑧6κs" d!M&!Qtzк@H *}HA"U,joTg``R<>xW>v?q$xϿi zpr&}~U>f^$MmL:.jF7X. L!I2L{ᬅU4gƖgˏt\5AM ^Ep>Ś-h[+_ >,<&__2o?۾֯6['PĎ#X)ǫ_J 䠞)? 
;c3p]FB7az O1 +1lC?J4k`0dwy; Ejm4n,>$p8*zȾ7ՌhfK ƖW,Œyߜ䪡\3x.ma4S@ټۧ@O~>o@izӵ4~)p8A'i8Gq8}M?` ITH0BPb]͍H4Yi:jx! = 9L X 7!DZ :,RǗ$#9&o/6ƕlmXm{v䭞l&IdM4JFTU-m#9w|>&$h88sI3~]jGX-=ҩEz֨.8C˳q.ȁ.BL!uNAH*1YB=;!ۢo1o;q;Bd2/p`N$ܠf 4XG}.T RS"#Z(e: Ьҵ,+w1߹! vm9-]F}ͼ#g!0֫K2&22Mְ؆qTT>ZV^99?<*om<`7ag7 ׼6ț"E3S5G8 S~,!ћ`B>~4ݣS!᯺oWv>ӟm?:V鹟L >Y9D^M̽XlmSӶ.ڦ3]DPgUXݭmt4ncsT*ebEɾmv3K숦(<_vd[g[w&πKQx3R0լ-Jܦ6$T7O%R^#-ٞ\psaӃ{7•6^q]/-!b/ [ZQq1ϬJ'FKAymeAdԧ3By*c}]( _]_e$ YjlîgA*Uz.P$TPZt'0/*CXUYYSH=^>3nKv,M8I% ,<kFWª"5*4B*u_E1s'Z6R *%%A*%WQYYOm ]V}) G }<^GP5;5 'B33%bYMdȂ"i"lMS<].{FD,o ml<+B8T>歞X֚v=lĝaj vnބR#d[FxaU:'礂QXN$A'b.fo R[s]sr+8{+ay$ EHPO/9 u;x1J8#Qńj"*{S'1!+1KufGK5S~E͖>Ξ(AN4N򳡂ß!M. wѩx.JoțߚhV/u9ЈyIzG X6VX0,AσxVǻj.Z%*-%cx,$XkM1?R!RSd 4R &HhjBI`~hUyy "}.vm΋ Er 1l㼉R`#8>,]ܵV8n%VGtYbV#y.Hkq < KNh/:QZc=!濗ŤBBQFabM M'A4~Iǹπ|OCw3OΈDӗsg6bs6>yi$C&p(e:#vo;;Mr{CZ菽1G9%|dcg"+EJaT1kH ٙs!B"%C;7#.v4 և:ֈdz6-Isuf@ݖL4:umŸ0Z֣z>+Mmbj+{u%:5hBʓඦucuϵvH*X1ڭ y(Y1KqcNHqja0L,pԸ3{ b~鬝Fn:L@oݏ_wӱѤ~/ λz! n 8ڻy,) m|g87! 2^4~/ѾGu)1Plʈ] (Psfw\SӟUA ^ۖl]+.p1^Ln8Q*h)v l:0~W%B|n*󻩓~rozZl+]"Ǡt 8 ͜2<ktV20r#Sg1T1̣ !MRA6MD!ܾO#>?Gw!]>qOOK,S!88y a[fЎ2>ͬ2dt*^\ŵ┣a^{ri]|)X krWga!莤ȊUT}ḐwB !1!rQͪtii ZLy%@pY ÇLK:p2RJEb7& {qb}PI5EccH6|ZcVLYaTnMflYBXd&q'A#Qd}^<4;idbSŅji+ѳel7VTP` f+a"NbsE&xЩS,a"R& T.:;0WխڀF6UAUQW>*-ڲuD/d~ Z=nAm=u՘mq3]_Mw /Qsi܌:?n%UŴĭ`B)6j }uGYjK*S$hk xw S)˰V+dEDU]`kd7Eؗ$pZKX<F_ՎY9k˪g44bUm0u… 52)U<@a<֘bKHTH(S2aXËJq"Z ,`H**ݼnt#S GqKZۡ2.*2Ym].H1(]Z;WӾ"_lEC{CBcWKb2G>"q#CZ(SSo| \sƀy3`"OQ&N]\%^c\po0YǘJgP VbE34U~oe,-x<明#.s @1n}W'vc I/p%/=Lb\PBAi#0.֙bC0.sX{whc≞bR3gV4"s,1ajA69yG rVJJCj5Nh iU"# :Uc{@ҹbJ!8jd^a8ss._hJLcpñsX[ VU!Q)P`-qV(hrL58%)x24^%1Ӛ.'I P WmN1f:z2nWrŕ?ʢ'q8+KwC* 6blmPA&Ԥ닌{Ȗ7EQ ϳ 2:oL*t;^ظ&.Oފ;na0Fam' u4UnCiG'}&u37[Uc^pROYZ0DVKHIXs <\VKNԢ=[;NRKŹmi6 qP2j)ٜKQ - 5unb_I]J`?jWEX/t0Z*"+!ݕť7kon}4FVP+.[K,,{jkvρJSW euR(vŬm`Ǿb}ޕ&.0R#HI;4c;/a~뙮>ʞHOYIoB*/#(2!}7_[H=|9_"r↰kC 8H|[Zϗz5 rδP6|DGW`|\]붯o&1,Kk X:xqK sxVo B~+e.R~](hO٘u|tjՇiO=DTL&zly) #}QOC/;x1BуusP?@F/ɔ#-H))"¤@/\ K7MaP !"y'(#^RtD8+ (? 5}} q{@|7H=_>i# V"z~I!Jnu-כ@E n}0Z=HX1~_~7_> W$; =|FE>Vw#2#^9?5is&/jy% YAdUk!rSXlQYctyV1[`J.o9zô~n@m FoN{ '9wL F6l0x]zє~*d9\e FRR{k CA6ssE!汶G#<;1wbo}?̶Ĝ> #qP#b0 66y!2,,';oU.(RyB11l*ng`WLOMgbBE$guj_e՞o&TsVx92RCb >u FK=+DJ>Ě$֦%O[O[bw&ŏ?e}_???(󻼸^v]]أOww׋ο'~z g٤'U|nrJgubr, Ȇ$VitS 7%CVU@D0BkU-G] BݳJ*7G|W66 Sh<ܰ@#1?NFΔѫUQrlԝazRe75L}d&1v`fjU!j:o굠nZCt˼,VCT-90hNC cnRhq-NjMc6\vunCw{Xgi^(XftFȊO0E'awÃўM2!A$@wV);{}炇:٬1V3TY/ŗ]\7WORD_wsSg|.`ዏAc#7ׅ%ހKAudyY^yOݱ>~hF\F<<-tv亦34סjBRCR- e%+/Ha!)N;;EU*[=O/JuRvEӆEOnf܉Hu)` Ro R Nh;jJi6tt5 €? 
'iMOi5 SQ deFxwKJ$.aH퓵Z RZ`CRHi FT@Ѹbb*Ysk5r*Dsnj54ki±sZas;M YmjZrk: 4-gDUlb˰>rII=)=B=d"ݍUs pUQ|]\@n2hc5ystC*w ^j <˹)k!֎+Ƞ堐?wvح& عSr[]2C"5&ڈ)y5)e;[#kHp)H2QUY^4e+n*ô)]ӱX0q~*z1o*7r¬~'o q&t%(fw+yA{v~"Uwc6PqGYπ}l@ƳXl "Fh77.ɎwOExGiѾaqMdH:#e_I 5,5;I1Qu x@0^&'r2Iɲ:氭?3B8:IC(p*r'oHaqH3|J}&v!8T^.J%65ֹ#E!eAl4VB4F`Tq+Hƽ)s{X kҺV@`[Y&GqNi?r\$@CD5.0S58g1]^bFKji@Jr:RrgBi Vs!H,yVr3 3#SY[u./8x 3z8az{YT9p霔$8[E +6yZG4hGNup(r3Phy X"/BOw]:H՗wl؀~Ӟ8`mPٞQ7 k|4xtaGN5/?]bbX^~ʮr{/~ww?cuxCd"ˍF29X9뻿vWۿ^ (?C퟈eS֚gQ!lxS5K4"ccX-*%$2Ss|u۞u#ttcs ɴ( Xn2RLd\HYʚshǺnZI6~T;%NaۖtIXfWА>>4l"'c!s*%8_ Ob@ Ze6\EF5 ʔXɭ}t'BŹ0Mp[@ļNvP0Nh7$dcl˔f f0e)M0&BM 5d$|qzYٕ#jZGhk5 T@b9!Hu  DdBtX"XaP(tn!&%rű)[D bʐI+*•)e*2[j 55c|abyA"7b50> Te(414 FE:Y Vc1z~seuu0D~ ZOV亩ɠʶd+#,Z7| w:%H79n:9WxNj?~_@XqBx;XfZG֛vVxd*)c;,ojwǥ,*LfJfd iH@ҽn;FEw#`| XDpɛ/8Ag?/`Дi;8gucF,yO`*띎O3@x&' u;)$̴l(<>/b!g1BlpDxcNJȩnX?t{JAyE^o2'y@0+3H~[ ?/'iKdS3lz燶f\Mj>2tl< 8;IN]m4&!qY]D )zĔ1ߐoрH}H6C>Y>dUPqw03%M&)i{MҵgpBC~`v!#]wz 2F8Eq%G6uLJ;,WEԆvql ~Ů fg/fk_;n% ?{6쿊a\.4?zlOqzNmaP"_٠)ۑӸ#VG, 3Cf7PU!4Ȟ&ѤzTx֢u誥z*԰sR8EGP,&x8șVeMԔz3&|Sh&x+x&xg |G"6)R%|Ga&V܄<]g4Nk\< يqCdw; l] ɯ|Tpj Cغq2'` ]q`ܙ"lȽM,e*G(S3z88EVAER A^<_6p"D9i QEv6QL"ڤ*oRSl`,( KA5^2)XpIjIܚȰ|ލ&'L[^#,/hel@} a&!ymoxۓ}u~dr* W^3ϒ>[.!w n zg%Ia_~END'oHnݚz6z8)HfN&fYNm !# ,aa'Xev[0@)t+T1\X۷C;He j ,= }]̀b{O;MC< lDM@0hʄqŌT[^G1/qvIpA Ǔ{C+N|;]$p-’Ctҽƶ=x{ڝ}'(HiBG!}B;cz#%hM \t/vwگq8Vn7VeOj  ; jN eH֣nusNd38>N/ƗM7hFKB`|[4ޡ\[Dq}FC5~ҐpFr#GY:թfcl~Xy(d4 ݽIt+~n/{]Q07fi ӓƗ2Ӆx] o,TGûhULQwֶ|*w;+.T$C/qV.PW6{.KTǗ8A2a(t"DrKfTp+eŚb%`=& `~fh@R5$ "MˀuST,e_Z"8H۷tT*Sw$p>,gc<>'?!CJ"I$ZTJpI"e/EUŘ*!BI_EnQW E!"O8*e/:?$=(\K$KyQ$d:)/,5u|/9+J:1!?7x<8)ʙMFH jηɁ]m9&*n Ýy7>5jAGQE7n1T  +q,RQ`"jQ}8JTY]k=-f/InP9t!jrmkv`\!BjrdiR7si+F1tmfc%܁C!:PQ'NvG.m~:^G!?0T+FA( NfY \\fˆ&iHCQ?ZMgmȒ;svj=Hmg7QPQ`|˨4?FPnuA|Z7n@ʄ%NFn31:7s 'YɧbqڅdʇVo(d7s[` ⿼N.f@ͽi/Jc4^.AydY4egͦNA<99D#wP$?M߷;-4VKtCcO>7ɼyOlφy+jB'N` lw0/nylqkE}YgtN;Q9\J)bBH`*HDa+B8&ґ5FHqK-I훆߭rYlWQSwiCP3 91[P^ *vwH0`\L@CPIJ!*w[XX0w8(zQ<^<PHz^ڜ{:j,P YL ӅYSOUuG)N-Rs=յ(*_oݗvI.M̐%XH2B:I0+QR$L58ֱ3¸+jw^OْjWqM?bYrwfg ,;'p&\ ݲuzu,k`\A7%~:ؗk?m+^ A }el`hg &GtdeūUhu+NY6Lˈ13ǂ9K@OL z%~yvjlѳ4?O$Y"-3: o'KO+I`z;O楎 M:GP!wדڳ??I҈0e,Q)O(e1 idR.cEuW,5"%^*HR%w#W+egyd,SX(S^?מ76/ ,%>4ΡZ -?yz/{[x?,~g@ή_ͳWyKjg}8@kWO|~(Wpy]ٗ(EUg:#s,䥏,%d4-3!/py \V+zsAv%ȋMFn?flSy''E1v$ȏ+)ƒ ~9Z̦tuEz^_ꝝ zC_|oS-vߋ:/TYG(g@֍gz#@;u~*^]?5?=Vw|h94q/9p7}~=vdyfço/!(h41Y]J~iYY~_lǾ8p1?@ُ>eEigBr7&z֟r.\ׇ/h>NݷSsb~tkw˼I7Z#vÔp6. ax߫/? {"8nLc]?kT D:!_\+jpB@4le0@:rh192O0X 䍭a0| &sQ+7p:jo{0o{oQ} @f]ȇsWqY4v~ňjοwf)*o;ÉsBM\L7&?j0aWq a:ȨH!AJ TY@EC 1R}6G $>a^ֶެkwgV>A{"? mYNS ?sZhfvyD`DOuԷk8vʼn,cJI&7 ,D)!PBZ3 JYHT$Fq,(@db2F1 0%nZkWbSjEVE]9䆀S;u`g*W`/,{,Ggӿ>KkE*ߜR2<[)ϕ;7*=Q# t?}&mCbqޞt jUWt)Ҫ+"]BM#٢eIR0!g(qlF!硌imDs*HTZGTZu(9E_I_7[:wծ=qk43[7pI*=?^t2ڍYɼƼ>>/IOvlg`\>tNs paH, -Ⱥn ۂ[܂c(ЧL@k2C29"r9"6AiȺ UiSJ:j "^iQ>R޸L*EΤ󽔖?i. &Ԗ^e¤ I82p&eP4)˰V>v~ʼŜ=ps7gs2̫:˭<&}^*Ukay؈:a0LtIh4ea61!lITq 7 'X}9#h{B(z2` ^7t`ݵ.%t# V' ,6h$,QW (1HakuJoC$ #QȳfbA0:)mUW¾ZZ<&<]jjȮZ?{Ʊ q,R]&AbKpl`X,:-l2߷$^!$MLuhmYi#BFO ,ț .* ^Io:L+ݨg%eV)@L\u1l٘fC c̱1G $fo'٘3jxŔ"YtF^'ɢ1S UA.)ܚިUwa5zF\/+bi_c%=y2V A+M$}(DhN#kLa@Tq!yH^bd1@kzNkӇC$dZ'\b 㒟++[ϰɺRXQF&BsX݋.X dV&; snoznk ]q6gvnc>=w+me]m;IY1$Bszv9&MQrL}&~> SLX^Ts?MXY '2`|}FǾZ{Ma:LOwE;=9o>{7b"oh.ي|^%t<;=wo7n}Og'h5%~Զ!|{*OttWƒ-wiΗLFk٬/?|Qz_zJ~ʑ7:gWy f{'_}'.K/]62a}(}"b!5o"bn@wwZ m2E|b6vp2˚USJvv'~TOǻOfPPKe4-Qv#&ҘYhj X"%$ ӘVí4f3N3>6LixZ-'m.Ӿ6FƋL,E ٨C&E-ɩwb X-)FMa&he@J}EXtV*mP3Plp)-i P5& U\ oTT 2:*eS_P'j-R4 L\| D`¤,AS*+pA(̀0 (ƙ,=lZr. 
X^1Q̪6ܙK.妺Rn&]킈9a;f`Y:=/<5]6wTeS}ͤ |]}np /5 74kTMS6ͤצ r]cF54śpJeHYJM04dcB lJANtV !4p42&^136Pc[7E[Qb(#?g `Y i(-(fLzA*jj(Te[~SM7՟L[Np-wkF4mrB$oR,#hL+AY:$"f¼UgqT2 eiUY~9hb]O Fv&43?8Ʋ1Mlje3cQ,X>2Ť%VmIϺ ,ugY~ֽugYǟAn{?2?,JFDTH7Pnll"EHUjx['^l3냬at\~%P-r_/B}[#q%.e"x'7 mVnT5snmLh_oT~V,>x?b^,a|Z9(sBg%mcIeVj MJ/BV`nDC;3t\^cKI7SA=JZu~:#8R!ҁv]mPiߐRolkLmw|+ؕ ljOݶƳm!VMTzPk_նv;꯻$=볏}[Gg| cS}[ϨNMTHUky֊9W{|;%K)eAXPˤi+q;C;64C(nҼN5<Ն#=baœBzTtPj8#TZU:C8~t  965bf b־aMMg8od )`TYz{||^ܢt"gWxPhưqB#1u[}3Gkఞz8w1JpSwo‡*2d[׬|㌾}:&?|>o}_7?~_F1.ԇ(0|89M'b(N}a;ѫ-3\bo*4d#Q/9?>?S4<oW嫓tY j85Wyп8MGUQ*O6\3Lֆv Y nnl0H}n4JkӈCTVk("j^+QVޚZ.\{vH"{Y1m,KB?太%hC6im\q@ ۴6N׍z޸7ϔq{[WV޺D4F7& L]^ުj f^X .EB@$N`Z}qWdL1Cޅ\xs5u]G^$c )TȂU$J{NI\ҧkN0T\Q*򛐶 h 9SDr5;]91%дNZT&[Y%Ph ֚*yAd $Q lljɭ$tHe5gJ>ΰIYrĂksf#H)gmH3zY/I#&L:{(q/wDRj68|T豔H_F 9y<1rvc_4>tvܵhC1cd!}J_1La+#Nҋ$cQB6Dd}cɇldE<$f.uu0rEx!m!$#$1) Ɇ%,3 8ٰQ2?~n O6Ig>;t>3':@v:  4[d]?.Szϼ"E|z=,h{VH.?|f| ߭x6bV7Js޿A|Zџ|Oo |>, ?[$ҧGoXbrx6r1m^NA bt ߫j8N67|2d:]qrPVxjJ QT a)#X?z~l.]?-?nUQ!Z"ֲ*8D"+V-`C@68kϵ}bš%4yCl7c%UTc1Gm%3Bh !0j^HȴPС8|do堊J[*&ᶃJ$jzY"'^iZ4E^BԠ说y5Kn"Wn@~+!@$(Va6U1+VʙF"WFp=89uC(jGsߋIt J]=EG/T ),a@$j$UU ުAJlF,_81⟿ȩ&V)Cv} ƫ{8Uل}CD|X5d`[5ɵ6P`p, eKOpȐFN)h;9 PN@J2nC@Ftؘ\-}|odca}L`㑻UjFVP e'VWeW}Y6N3k: %GHh&7WGN|76`kU\`ZA{ͦ AaeԱ7xp呻U_D _&A]d6YN1@`齮}ֿ C|P_#^_epmٴ93 6g V,h/%JOf]I6+D&aL,d%6rZN;93 ܥ&'7@0Y[V^doЦT+ijOsq*)- SL\\RuY+#5Or% *鉏mEO^/-O:,/D2S9Y~<~QD猰Z p$KDAr-4^aAЭ_ٮ4tTHFYb'iX7B1.Mc@fIuM <&8 d*wƈܧր䮁oaEO8G 8j5I|$'v,=]nՠ@%QҶP(!,} -%z%{{*"o8pzLTrhUhJ(TUVy#nݪA}q~s7 U9֪9O7WsD̆c,yA,&BYH%"^Ǫ\H(([5eThDCԉ1&kuW_@(TC:՘{oq3S KMRI2$c@,傓qE|3o,:C}Nsf%̓(VgZ'&XDdV&"qž'901={+C&Om8& _oub 65hUHc@24{w8&|C@6N!-H '69Fz%@7Ԙ9Pa (,[5($kd#4 I=\lXre@L9%j蜑U5r,0!PCfTcBOd"jVB͢p[1(W'a$wNw89;h".@ڪ-_|nA"eH0`,d7%~|dܢ&p.=Mܷ >(Kv.q>_[ψ-+-+_4K~FO_ +(^oA2D`>aܘnogHin^"eV"`wrybQEXdѭ(.~ł\\q!ʲRS+&%0c ,R1!E/xULw~&cA[Ku1dAUXWޝ-V _1fq06pkRѥU9Y%nl!<#/4xi,a?ݏff-=Tϛn#(t <λ~rY7>J{c: \NV>KbpBϳ;C B}D7++ (jL| ?L\|3cwbr.(on qZNN4lFRk0N W+׫&Unf{|] HfrRBI#YlgNJF->-ӥ>彛QyzsәN[ b~t OÔL"oF~l\4US^dMhzއg#7XJv^[yy#aƺS٩ t I,e_Jtzv#Ϛ':t˱5 C]98FyYS(,ltҀVTy+QV IݒX;'W tsLXt}s]e $tˮ{ bjtzW8{^WvףŎ7Ln@f6rld8˼s4rjdKJmgE7mk ۺWm{ؐ@feOt=$K&')$*}{D59R_zpYRHNo+nzR孻}:AO}yx62} oJ% 2/dPb@[Ce %+McYd]OX.[&ИkTƘQkV%~vh&*= S9ޘ|DՈGTVFכFLY9:yLΒ'<.B(V@zI˥J ynO ϵ@ʞ`tH| imPD>5s4s&ٚFrJ@}}BUUj;&2y̜黻o괁,&s7]ޥ aVpK4biw_&u:><|<=S-F=B诿[92\}p&&wVN8G&}ubZyp)-^H@3s,M^㹎Ȋ^CR0B TvviM؋_ȎiKsVvcg}֒ݙ[PMFh;:uv{?J;[WfA=2G/:WJ9tϗI|AƧrc?e]g,A&ACapi̟T>eU᥯*U\UXΩ 8H*u<i2hL. dڀT#c"[lSvV9m'ƨg~"AnNVu&~&mw۾a-+x)u4EJC*e:LJB閶TYn/_#GӂTD5,)\u 2!cqp4ͿD틺iE6^PGhϒVx0-y?oA''CI#Goc*| j㩝hՍډ#dӵe~ti94P Xdq#LG[ORYZ&7MG\U:bDmҶFy&*:l|\zl6ֱd#{L إ's9䄌EKZY^5;\ij8~Ap;YR=E>x9r)T28 V% #)^t we΃>sØsLCeeﳸ0%M6"Ͼըr"㉾Zw+59ݞ\0-۝AD4R3jBSe_p\DJF\([=\8B@tnW Oyg$nHoǚlG URGH2ɞʉV@5 zp%,p vd>l_OUQtȎ[޽oo?rRL߄N{x'cjڻ~߭ln"`:Gǻ@Wiz=N/x߾>ȁyoU>t8&٨V`5I?Mgzc<]h\# cIФ ;bzZ1 tiAh?r)Bz2cc<=+ !DԳSX^&`JG -x= ː|K%HK8 FLT.NT.NS)M 砺 T5F*RZ =.LLb ֝J@-^0.EdD璽vz^,`Xo$5p8a&ܐ8@A?ʰjlkǵ]N6i_oy/l],:%8*2Y9ңq]vpL٦h89`8R^ >Rlzic㌌ݖq[Iq6[27/{r8|s o52~qmtB4@nR}4A׎^4`mɣ/A4~? 
ݣФO/i _] 6a?7c G3|wH;Gq`ָw \xW}:WQq';{ ,_3xӔ6ͷÓ\߿_zw?ǍଥuK%ŕKa +-u-݂uVhh  n)o ]KeA3_?jt۝C cs-eKOmۉmGl {ov+|$C/ u:מlv栙\zsӹkNo޶AC=M#亿pޓ~^ t r_o=2 bn1졿107 (8|7 } 7^\x4CBkR@𕝾(CA Wo s7lZt9iﲇ˟av,g{crOL/÷pm;/Fˍg~_sY_D'7([ Th[_Mw z|Rprd#} ֊V7@5KЍ3L)dP~r(WFQfc}9c{u4Lq̳G׍7ɯ=`EZǠ`w prrwo7/bP{>nO^9m;z|}~y#| ]=w'go_/:ޗNtYcb:z{ Q7ވ RK .i 7ޒ=TW61m^$䡌L/`!o#>Y}ǟ@aׇܧm3Z:%Zvv;߸{}6u<l{CN%_ wێG9`9}ۿͩ"}Fm|23Q!W1y`.ͬJ!9nr68p,`Pg$ʞ}`MtsS{Mz13gK?vQNhtm:6my]Z.IX v>(Uic)$5 #C"OQrG&zǡzI}iaUO2`<ןZ,{\/ )wuWf)V-seu&3$ܭV4:uK0`:QH}tzYy00Š$` pA0.pN9r9>p]N{CtnQg @퍆\pO&.PQ<._rFf$cˠ9>(~ .~1׋b8X|.r{SܑNpq#YEPjX 682:Hn )CHL@E *JJmkCe AaG,i#MXCQXU,T);2Pܓ+eҠxp $ԂJF (Z<uA'Qngz[pQ<+Oy45d"ȄWf)tJHT-ǜ6!]8vgQQ`YGn6\U>ՆYN ;tpߞP*>x!ҺtL7K 4^й!}?`M= A:̣2kcUq,a)4p楽ށ^}ӿfҴ}uՏ wݚr#}!H%K9m"* b}EWUW: XMA?,!yZQ)yH)WWA`rlih$h2: RiNgw>!+ R{6A*KEJ=6M| UG ;F|[(t;vHJ=Է ν"mzY=YBDYNh?rpL6̄,B?ٟ@rgailh n}U'B+˂,?gÒ27U |`휠 }M(FZg:BIyNb~X ME{S.sq%Z^ 6OU\_bRuU V#UK.tsATբ~oήA M̹Т(sL$:?D: Wr@M^YJ%p޹f)6<粆_xW g{#mY2O}m€ּXBbSi/soЎ& r pƶBa8̶BG*_/:[Np.S}n@'00>޻SEN1=o͜`-޻ZZLYta7mF& U{37|eZaESZ넥x41 r#K͵w7ҕ#= S 53s_vcHL]vnݐ=OW X#f\V[&_UL2=M:o83p4Z]@&>*BP$C/9o)&Z/t.`$ Ƣ>| ?ц hӺsQRVF+b&нID8|6d[_ $N$g{*0f|o-$APx89C~ա 9 ֶUrB(]D%D5MTfcD}g5Fm>SB;xnZJ Kj[f˯sj䮽ʞZM9+aNP}xaF^r<\TF״cfgS#k"ދoԙ]a4XWg~Syg@ A.Cy~y]_Zg2jW+/$u`yt8;{6 Hӿ'wìxI|C?Maseme]~YpYu_gu:1yu>/,y]ЕsPrlZLRh kجNfGְQ4L'Ƽe䶕k>?8>=|*ڒ^:u1-vԫϴ]pżKw~~\~ # 18pJ`L8ԃq}L8G GǠ@G\yނ,++4*hȒx,۰i%,pɀ<0aqcìt=m-=pp;ћÖM5p:R,*0gT14wY)eS{d(}/dPA@TہL2^%÷aɬl+0:Oq5w2WW6of .I+&\J"E`.Ʉwݫ>>ӽ?Da8dM9w#7+{q˫$'F)F4% Ɓ?Ejt/XƐ':7.K"{[gfA:4^7Bn0scF j04/c*lydP&@ R9V޽Œpo'||?вA\2k/^>5@:B5ȧe0{يrl|fv k={Ћ *_#j%Z5קjqvd.<9ٽ-,,RJ!'($Υj2תQ,5FGi [MDUcqLUi3JUfWLFs[f8v 5e20zHUF@hI|[m' `'|&f>ɬOYgv52d0k~ь1%pF(mݝ(+{?Ogq%-hvߖJl2yByHl|9g?iߟ[vYn=+ۭgۭ;ͺl2UTֻP*O,G+,d%%QL"I',ϗžz>qA(ʹ(͟F"z><~",UlQPQO#(퀣/xRM UT)%221Egu`CT[n̺3u2Q5dCzeMB WQi3:!_ h$4֓[ID)C.Z}4q; }jJjF}C+`#tڎe[_/>^>oqgK/7?ƞ~m׎T* `ȔSTq8:NQRh!8X7RpOb?Dj|OY\JrV"e?d.L@d4 4  ɶZEJlJ-]pa>|o İ.J6O_~s׸BM2Vr_j|.?\fi5No4OaL?P18ZLMCOR|$}|[."Yzv:_P0@=T]BZzW1ᜇy@/n;iN\j\}I׌ȑ>(Xr |/*11L"$lrdo H_AA쳧kCVT<:M$F/9 }ٰ5hQ"JU魵8)H0eY V0RpJZToP&PdH$J %(N/6/+aS+NՙڰY0!(ciU%nj,w9D:'@"vp{t*'˲"] lśO|刘G=Iw!PEEL2} R #; P/ 9PN<}L =O"D+nNMNZ+GhuzF: L<]0k?4&@NWG6匵 nGU\tl 7gWō;? J3Yº% .>mCj^+tgՓ-YXuiP#CF[~Mj;q9N)JwtVʺ%.>i\)}R.>m'J qtX5GWi'9np8J%ZRKAn1XX2!G͗N>%̠"+ ̕kC&P UޛӞ{%ֺ.˙{nÇ^YRyh4%q.u׳z~<%ѣp7{@*NR(R!*"s!SE]8f0ig%;ˁ]'ys2<8[A\p먟GA=Uw\T*ݕW綒Q[s"U+Я{qbJ"Ļ;O8:ŒsSIJMH2,Ge!HɄD9 kҧFmnXÝ\ ߾yquBz.脤H}B}V.+"Ƌ\#óJ\UAȨ%$ RtyX $ qL:a"HI\ȓJPqFXsRF)D-k3J]6<<1XgVhƱ֝,xK2"K80+`#sYQ -rSN]bI*(5 !2bdx@klcz;/w(IpwciuV*[\oF] Yc!;ŝO}|z9ZٯTBʥ(VL sa'ۂS Ri1SX37A)naLp)݈X\*#xnwLc*w TT )e3&lm2㵀ܚyx}s\c[LzSNc3Ule?GYaʔ`\*f;i 'Уs )B$ e̎uh.g 1<% YCB4[/I,ΏX<^ñLbe8bظ8aEBm\l _|` Q1-5Y Nv9QKӻap<;@wY?qBhBk,- RfeH.qW])Ba*nnݟz[ԝL=gY?;(3<񩿨.~OŞln-ά1S_noέͪbs[V}vʳ{D՜Dǭ-(qRg庄w j~NԇK_x}QC. {E^ mVӍr_[o9ΡcFXb aReV,trGT]KG^Q[|ӟ^|?zۼױ`,6m3ņcegPg: M"L'fߟ@տO6oaVC_'X$9m蝵]OnV6.|Ɠ5?;b<#*2dM>ڻP'yُ' &bA1INGӌ wAR"1e8R =S h hM@%QXI((IHŤ3 FIEBBSFKT@CtF%,f?!EGh@q&8eЊJrkE.HpiQD D16644)%8;>ćQ\nݗM]TjbQ#!&Xf*@&$i fvR\9׬~ђd(]Exwl2fvty- I KSZbEE&U^9bQcRPa["b#*EU ]YXJ= /4ǼjюrfGw!{rCvMvçS$jKdv{'kIi9߽j:E\᝱Z:?ޘM&3{ǀoa\ήnoZ(B$'0ɦS UKDE4hN ;v/("+ ժv@Q`ݦ_[ccmK+XPza7ŷ:A;*Ցv4ݥAvd\FGbɺX4"ĺi?6zE&Ht&W -xZ U԰Uwb0炻XoT#x+:FXi-TCv:~ S ?utW VY?3 19rPsl8΄RQ-NSL a]t P{'Yã+ "s62"?Gـo,Crb&g411&}ΐ$L4!W,U<$9?C%ܘe? 
Z(y Rcnw=D(>{7Q8f]3}[<FSEUǰ_7BqҹxF0KQگdϗ7On|YQ!0wM뭡d[J,O,&N:p$F63 {Bl_rW߽r[ֳZZ]_n>@`h2ol$}iN_"ѽ霬59EH'KB{R̝Bk,- RfeTqQO81#iE.fp(G~(('W!Cw|yQm/+ AR%ǿcPY]E" {JG$YKq1WNS[Gr=6W\W*>ms9MkרzYn+S:p[8~/o<#F LVԇJv_.޴ %RU) @',9|Xcb1_*rLbz3"'*սuˉYf_V6'Oיuget퇯=5ؗ*d ;Tډ_9{B'$ A\idf +B+s)B~& nG}w:>]Z+kbȆ+*)+"&)fOX+t@~*z`@)Ȝ!IŚdD IzԃUE-]rZ]61 0/=W}9Jr뤼Yp""}: LeQ1>|=%d)ez9`)O}q|9Zn9xi\PK}R_V$Ypq(,CQkA-S.JŃ Q1ň:{uܲ#yYX'c;2(.f,a!&6M拻Om*Uξ}5^gmӯ!W۶w^PZ9q?L7J: tcBQʌ,B|, 0*Be%TqA  Rsqt!ACSƉg`9t,*N¤Gb a%Vٌ͵cb+)͋P2IS"H3ADn&Ǝ% XS*Jb6Vo@@kQuenTEpb7~DݪH#y)#yD hˬl]w04*@fp;Y)H.ɥU߂g>xrK+zo;xexg75V{I S Tb<֗8OKl]?f+waaE}TH`mhJK.2Yiъ61> -v||Vze5~a߯G]onƽ0Dۃ&0|%/8Wgt^G6ߢ8| ^kˉ-E{ボ+U&֠I^kJ>X5)l[և cB],{0F#*rr.GTarֲM]^eu!8Spk fL T *,]T T +ڔBm>+F$Gt\ʑHۘp*d-%\{Yru0*\S'P;iRpz~K' bJa<(h֢R]LVcbT'yh)Ƭ>ef^̊ډ.*;/dr]>gvHB/^d{,EAD_--$TS4HqșDⰻU]]Ig3I5D wMp^ fwbȍ9GYҰm*{iHys=,͘.÷tROf Wۡa/8HfХߋ>?p7goK5}"ѾקG毫O~" ۣA'9)~9?]~zs1fRsO\Y ehp&ӟl֛Y*)Hvy=pY抒Sx5o./ib#Lfe9>ZO\Ƌޡwբ2K Kkn+C~2=O[jqհcRD_ՌǏ *Lw6e?:5(jbGwF+\dA QoUWQ(YsȎ4Giuů7sU)u#nGyjj'by ZnYZ}[|6 ڠlq%'c]{|tJ`[TÆ]p:lb ԁ/s|Hr))K댩qslv8F>[' l{BK.ttg-`ryj7p(FwZUzw\A4?X؋Owjbl/.4i6ߑ:NHVvS)8.%hqYP?OY-$9=nK=g4ݷ6B?9SsrT8`0\1CĘxfeڔkqYmDf5('2W1,"CzIJ1:7@2xC#8&im¿[4OfM5<9+lsAmῌ<=zٍߜoJhٛm^ۂPs=P1uCClՖQ}6"^2f.;(OQ@ &u8xX{b V:vuYo|u5][kCdnJ/uv`s|Rم1]:8!GF*_^|s 5 Q'%e$:8[* R.e=m TgW v}mYge.ԯ.}2,&S ^T<5҅e~jP3::maki-ACwPe`dOi+LBncKNPTH?!TEk!SMZi VRH _6j %}T|=e); h$b{+E:>7h$͏,)\"W~- ط?TA)U4Zf uL@oO_1=]~J)Qp(RI.}sYrӳo-n9X0@>mn2lܲܶ!Raͣ KGۍZGwWp(EcvûV2aUn~W; dnjbf$c(u@Nyd,^ib 1mZطzΤ9*0h <9pArNH˜! aJ)oS: N)դ2FrO2xH#y"ܘV^k'킰5'wxk ڞw0@!m͵GM0dOYD,)8"uKc `E:<1L/fMm0(5K,sMz4,5xZmH(TRӯD3^ qGC KZ *Z7$ znd1@ui A>b*Yț=Rm4Pߒqip5Npt E1GM$?R$CH%+.Gޑ "Y 跭PI~:w2<Γ&HO*PN> +~ FQQt2NA.ZwdUPJ)ژ!mxA/0#+7} >ܤm3}kO̸%ZY^y$(6o\tOV^J:yF`@xaUB;8JSM޺J iXd ݾٯ;5A4,3wy5&# x !JryAVuV@x3CEbx$#Y2""{޽{ίkd9cBʒ噊I*^ҡs׈X>JE!IyYj'bfV[Jr(FOE"HQ)Jd(aNr6d̺9 ݎWڋP/ j--c7(uDwOS&t ZZ}\kl.όs hDa5[TXN(Ƥ8g0QS0WShZ9(I+ # .rNkd 4:s]^.a.W$A*iE/NJL  @[zp@Q"Fd "ku'HoX"C.\1>JH<F][s[7+,fkUyHT'8 ]Ag "QEI,*:BDA`-~))1F!d z;xHU{A+Jm/J`WKRRf(y-c m%AXǷA9…GҀG|*+Ʀw"J #<%ؕlf,X)%c8#$y}pE1هh . oԕxS]f_P)*s&)GlWns:*%aGe+L+6$Xn4G”aWhE<D3peɞ|Lm+nʸuT0[1mzowW*ٕYWr ?_ֱv_z;_%~?׷rO+ r`I&*0l ]Y){HI, Y-Oȷ]~SV\Tuʜ7N/njKю/Ә[I3W4hUͺR8!u=z 8$Z yF+ y6El 3DV<#{Tl N[D]9B9u9ƞWNu P.vw u'9su4'aINn~1J_FmM rv?fW$]ߟw_r~9_gp~__ǚL;?;;X&q寳GbW{5PEU1U=qpMп80}GwT3>d|lܘXbYr~]oV|n? 
3Va41֙⃳!3RA/^R//^C O(`\J /@[k OԘ*5ܛƊ̑~7()hAxI,XeAb$JF(J݀ʒhUIɈ)$~E~Q{7ͯzxqr'q_ջc+'Z alR`pJt #k!5WYVȸ´Rue ˱vSu0=n욚T3lR X(Rwy,*Pg*yj {L"!^\T3E\NZY&}DZԓYEr/>& Q|_E@+|[:avX+x H)u4뱼]G $Ç\|\*Xq,$c_HTdGAOwc1.7j H:_f'%`l>lV>=y!BP8#rAIY2VT4ޡGtpRIT-.rQ1rZob/m oMb,UE#/ΟIIxː]`=nvksfIG9c"XE8Xp*/(r ( (vIǤdѮwu&//׏I,fuks>>L[N6Vv%r`~Ɯ#X57|͚/K]..#l3\ZQ9FYA%.1ZG3*$vX`FS/)E6)_2t1*9)4msF"/^:ĽQ {|) Ph" }5::gV +\ݚ=Piv + Mk_9zW]L;T?sZRwpqw5EysշLrXr:q0zx;!>ƉvhLjݖ5Qjjú>K܀FklaVJMDts ¬3'*%^Xx)zȲ/O<`j&bDj;E!5qxdT!Sz#ciղĦAgoJY?#`u%(:VHys,5V'b"2, w)4<1F;,$SeQ/ǦZ#<:#d`#{.e,talQFQ/&+ёJ\lbJ~~EZojcVu]'eŦz[,/]YMy;#,a)i@x$+7K$,JlE9sRS4.ˋ6Ŝt퓟rjږ8Ze"iTuSʤ#sJ~"V8wY1Ie-VF^zwvs=[,o?n,oXy,2_ﱅn9nA'ZݛWߌUXHI+\ŰK,Qbi^eZJKħ;2{qh|܋gUE&x+#eAU X0ܰR1iL xƒ5+M璸}` ^w-Zl])wyJX)S 8mx ,l]{wSd@hspR+{JIK2i=%xDOS,QVcRw7FnGѕeg HdOѺ4:ߨZ 7*KVָj}~V ͏cXY0˛WMk  cT`V/gˋ%O;[]j,JZ_K?Q~Q]DwEUm/g?z \g\G sl8gѽ6D9dgc*Ng.P 0[?Yw~K4HaTOQ~#x#ժY{﮺mIѯn._7/f {r^cO3K4:/7TTq y q03HMFO7f擙zD (z5Xۭ#'.ҵN%Y*WҽWP5j{=+K8\^9^9,6ѸWh:"KZ| 's KKGr(bۘCri;zpn?ǜppu-5w}bX<'n {VKCԤxm]5)pRTykK7eJK%IygZ#i[j#NV jQt-{u$ஈ$]ՉH( rƀTX_)nCPAY]R[p,*Kk oDDJDho2_qF*cqZhC)gj` j : Ɓn=8T1,'8gV Xrg )-уd.kqTO@`[ִgoC>:R2^6"\ }I4wq#pH^{q(\R8gsbDW8)dɀf ;tW:^8&\@Ε'-A#_pB*><{u-ǵ`KDBTDhĬYPYI ځV𔀍+n@T,Rf^]) XW*¼4%$s 5ʢXy Tl*EJ+R*JJΌE@atZbU $lt"Q 7bÚs00 J l\Ö#Eac!@D2A |ʼĚ!I%>:iVb`Ryh(F7dB zAe :UӒ@Tϱ/][RAy=TlVMlh,Mdc:Eޙ֙VdeS@ YHY&lsF *sΕ6 u ;jr ER6d5"5(t S2m ``\g"rNLRPCA mƴNBf(P.w2-2/XWjaJPB^_zR.gfXLj͍YĠĀ7RF,v '%U=]f *wP0LsLSstaT h5̆tבJS@4!A6B7S'H3ړdXAd \HZoc+ey&Dg)qeIi 7Ej>@SwņP7D[΅Ҝ١V6Իl"½YaߑiUVSKni_su- < g:0}N1d fR)69A>R$)MY.CS0˥Cn̹Gzy^XZ;iJkCK1%sE)rh!,8-lpSaRK-(,˔tV4GyXF S@g.-4:>mÌ;, ) 2rOL^ЌQQ0P0Υ, \ Zɻl$sE+pW2ïZc4t~4sɘUJy<0Fѡ YDodP ȎMP8O]l}ϭ- kyK9`zNWAY]mWśn'`a\4[r'sRtJg59̞4.*]1A.ҙo^3nHԑIQ4E i",HMH:$AQI-o?Ԕ @IQ\ACPА#5YVFQ̕Ƀhytmҥj/E)r=1TAFzQ·K^d.D*{$lAVQnܶ47)XxRV$ ȡV$"4*}Ejت^eB.9zz6d˻ y6tDY6_7AldF~7 bzs=ɂHhgm- aʉ~yï i%ߥ碠Rt\7W#?̗R/uD/WԐ"{L43'ȵ f<ɖod_ħD:QKY1 As|:I51Sf\ %Xy lKiF5RqYX6ghϻ؟FB=989jWw8RLтof'7X|G痣kr0]D7ϝK%p]YQ,SVLSmS@"Pʱ SF,'θ4dtBEgGZ-v*'7ч䐠۸tm#R1Y:dEi:^*֕Vjl魣1NpK!P`)f sMB؅?䐦:fl klz@\cw*eJ"-u U`I ʹbCB[kux ]K^*-C+yĹv׊ift"M&,*`Bvͨi4J Ne,˜[j3(2άtM`ʲ[L-lEI8|˺lQ T;7d Al6ݐ'U|{R{t댨rI&5N|N cPRT Jǿ5AZr;D:dITڅ>c+8tz OݣB̢9\AQ|Fq*Fqñ5fRVfV.+=86<V)k+E<3G fVɜ7. ~_wmѝSؿ> $dJh=Zm4] 1@.d* Xiϋ/ iʸyt2TQEJ6"ce: Mna8E~qd#^tx]3~O)z_G.QE*q}6}gƵs+z;KEҫטϧ7/ˢi9ik@QWCvs3WU}.eėj-]@J4[:~d7+ tֱlr/whk6_4º=&Ňdb첑$׾3rY#5.mldS +`oD.$3510uG,۴\Yʋ"MF@/k!Ʋ>E1`7U59`P=5M?5qiYG⩀Jn ?2v Sf7ZX<<+2'`vƧD@O)jDC:~%7a;->0#I5NRaRd> jzlfSnSnT-9cm흲n[V ݱiKmZ~u}.\Ojw/5U1>wPس H޾ TO rW"%[ v] O; yҜQs=F ?ޙU [1!5?\ųs<ϖO\۽plųmU<( )E{Y;35L4 =PE}\[[ճmUFR o8yo/,E$z#_RrXB""t)a6tNf(ѷyŤ(jc\K;8KH2܊K K٩ 6`k E秫6~,~n#`%ѵ^L8BrЩlU s/K&*;r U2ź)H`}]T8M R>%7'G~w5)c9TKXBVu}!A[c])ObRRB]5J~w)k D4%L4Om. ͼ_&ZY@5+. UC.8-1ח3bk{{6dPn䚞 &]\m:`ãB!݋0kM}cNlK Z= ƗR+7PQٟxPCp~.](>ɞo8%d)}w+ouW'O ‚RA-pl{s4n57^L.u0'2twFOⴚ??gU*~o4eߎ᳷a}L\Z BZ6R-39Ϝ;H=?dC'{{X|+i/OƓC=JڹAnq4**_[i Ӻ,z_T25I\근5`r+40û3qc+[9KBi'g1N\*6X9Q1geQz ^'yH~= sZ9 #=,Ōa.RY)ra0 4xYࢽ鉖CV9\~ɾ(vJAIAS:| /gr'aZ}tClMw6{T_ۇѕa t)X*>OARnb-V70^Utf~Dzu'F+wjt+P(*€ᆩJ>;Bh`KRTVr>Z0T*YwCxP `!*V%6KB˪VF^ G]Ŕ:FòV.miݸoO5LFw]S ȓ O2qķ~1}J,en3ZT*$'BTåIҜNfa3<!Y]{k=v"8gvJ'Rh/TLX?@JЛ ?&x bŦq%^;y2@;WL3[;u╞ ؆3Yw͹nuN:ѷ5zXt95Y1m6g`R;C3ZM7A0aJ&TwkKX>oޣ+Iˡ*gKa*qMyTG֝G(c(3-=lhjQLir=π6CNUT#iHӳN.D65V22=WxtH#OOVp;ÎV-WDD:A@^  t@4FiPwfyrȬ _[_ 9kstLqPٲ9EZk Z3gǑ6 cRE# uhsSBD ivJs:@ބ2%N\Pf.NO0xaj!CNdx2Hh .`71{nҪ!I/Q ҩ:r{@(h!Nn??J]/]&7{N Q̛1ogMNܘ+\y7ʻ1We7f~?~qsp7yUQyfE# 5F<fc @EoH ћOs%km"a(!zs? Hz \!ETL-}fVI>BfZ! 
`ҺkPh8>.]3MK9-eU=gN ]eD̗GJafKbe;TŠQSdgEƬ!9˴HA3 XPp#\`EDkՕl@)v|qT&q2N`~H" 48s1䔆PӴ_u*@l(d饋vi2L+J):aEbX+%W+>o͔P@]OP&d)jM Ev!.qg2E%ZOFS .KS!gh]5M)ouρsRLJm?>NDB TM˵2A&,cLq0 І:Iٹ=P t4)z,ZhE-;"Ir2 v¦d)W`|5_;T]:Y1m,Ga .VN)wz/i IYT#iE!$UFZkEj94rWtaP\I} kI $sx. H9 ";s4N۞(]J%K;M 4cI!CxRm('p_CLKȩɝ絟wQƯ,yQ?I)a|s\y4U^Y%(A1 ,܀?л@AO3V-5MSmg8!D,84HJE$7ESʹ[@mSV9CT0#5գa62Z[ںC -r(.`%A2,/AfV\E T(*k90Rκ̱)\+0+QW\.0kihi ?vw{ч+T*~$Ra>.  /O?x[*P/=~~.?|~V`2Ż}konߞ1B彿AzWǓb /o5Bٯ!of?:;N x'pmhA=}wsahzf>*ps|uSZ%j8իtY5JyG5`syUJ8)ǝ!}ՠ ti@hti7 iT ns2:ګ $<󅲧~Xdz o}GE@ztQU( i=ܛ7[N4A:;~?6\}RV_' pӟ?ӭae 4.yp>җ?Zg?_nwxe-PcI򈺧{8JBao8g0jʑ?yMjZ銅{T/]ν}z>|\ Xbn/ ]Q{F|yu=bsHHp*`w|`FD 2c(ۑ=#**z1l0y8js*TGCB>!WD I u6k4W-CE($1X!5$ZTʯ6JP4ݢĩԲ10q l䨨`|oᤊO؁錴dvDSQ]637IVP`rBQQ rj@ . /R57""/܎z0)qx w澛}c 5ƸC 2.~*>I#9Gs;L#;S]2 3[,+wwL[a!4fl"j΅ x pLF{ݍ-r߼S-hGƳ ?FIZzT/!K)>&^`ENjY zL^v$CfIp+q!ޚq0g{C6FpݨӭN9woމ/0o+<> K.jx$"ZDQ;bǫWր/],}4A\v$*L9cBS&G/Ftk]ugp3 d%1(lxƟ@@Sݯ<&}Y,~<ǎ[{}z3tUT5w9b[ۊ2EtNl LΆhpZu16 -َD8ZKJD;up-T%Ҳ^4-EC7*!aS6՞Y?)[n=2v_´ٖ^yA.gEij-ThZD!3^|lde{I|% 3$嫎jZaYTŸ ֛,,p׳z7{,mEc`/)[Km) (&M읕w~V^;?+ܝY1C=caŚkMTcȦ.[w= 84 <xۃ0);8jN#3mXsڟ><થ^aÚqx9}Fero:ɡUzݣ):v}`Z}wMW8 ùN.3|eܾS@Gh;ƙfom]v.Pڠn>:@ӕuTKc'v0NRQc 29߿_—07 >hō/wO30U@ Dž&4#Tk6n\}vߘb' No9TNRD̴Z냍`#ohGh`^YE36J?Orf6}*ogzXN"_*ƭ2\0+oq8~]xl-.GgӏgAMg;ej4t:OqQ [Ε1f3x ' !V\ ӿcJd̀w3wȾ'nJHӣL`BоɮE4*jwy^#rUYg)k1"%%haK/3`  Tv K1XuY /aVHb13+aǚw !H ˽ҥzϭL%)AS+1c"`.T ;F<ҒYcwǖ!ʤZr1f%"M`0`Lo>'E|w>,`I,"6XO . 6ԋayC79wxs&g#SYpc8g`2a8sV U*U߽I Xэ쯯  Q9LHwNNIt^11 ֧_0*xilaUy~$-])]H˗;?DX=?)}neML))7岚rw7M 'D}?<~ku~` eYU;'M)'"u|J[qu53@'Պv5:WٜS?10"B'\htQa1|7k#6ׁ :nva@aa@q&cr>r:RMM z-`JTV2{j<2J ~RHl!ӂ; G&Wd%[=Wdai]k6nyy;^ >O2ON0O,hkrHk0d+0҆( C^a@1z$8]ҷt0=J$ jU%'XfD 5# ߱wS7 w0͜=`E)GLפisRxcɽo` .Tx S#D)MfB[!ൊٻ)5Փ֝Ӧ3l`kM.z^A>cEgtn bL:AϫNv<pEMroT*u/ >x̨xo9dN1s;WOkBb͛ƮblʡRiJ:z˪ynl5p"iIeƠEM>y:Ow$kE]?qHAR0r1|ș;VR$,Qjݸ㙳IPw5UJsѸ#{3dM;w=rNvkM *czDv\ZO_e 6iK7 Z ~֤mjTLl&Y5W 1!II'qxuղG~p-*N(_N E Pp,8 ص\&sp[m!kyЊc) tUxD18:XKUP;7"M w38-4Ï0| 3BD4%;7\ f riiݍ/d0 vC\Qd(RPkn7Zٵ {*]v\Y&kVvqzi|صs^;ªЛ_M\ _ CLnTbʱPEd4SSln2KYw ǹѭ"R]f_\}W%/=JŹudTGG9gWd5{g]1l1薹%B} SK74Ckkc¹ ̕"܃pssK73aڣKG4웅RmQFiDᨉ-H:V) -=hTSptN__ĨSLH Jc1РeW%$%6R$*H{y: 3dwoc}Rapv17W9(izg%!mE @.X%8u~Ș;O<:EFנU9'S9 c i4͛WƊ-*(LU=H%t\YF ;ٔPZu*y ΒM<BnnIu/䢴tq:LlJombR_$~tS1Vj u J̳t5/}:v1 aF(ʊ'wQ |ɹג׈X% ԧ 4=20RDB% b3w7Xex̓=5Wp _m[M>1eɇtę, ;4$zH6DI_ T!?uW A\iz*NaɞRzkʯ, c>,$0v ("lњڌ&4:S\ڏvuSDkR(Ⱥ@8Z@J \^+bXYPs-;v/c*VўYn.Qխ8z*B*ͪW[@+Vf_P3(H<pSNvXǛ370"3R;̳wYFk$ja%;TKm EU,/չ&)mNSD8:ϹY{ƥruM幺f~1 \ҿ)D.WžR~1j0Qs^ټ cآ';(Bw`ASQaBQ ǶEZ˅OޔEFhLXuA%;.-;77e1 '̞WsfRlK.AgSb2J`~*VmV*DD?Kozx;ڬrTHEYS*<{jj9'9M~NAOYR p¾)h21JA㏨@\SVB9Wq ޿* |Xᚾx8lNY MG)>@r#- F«=W{ h+y(:$ KÎZՁJRSPy~F*SD0ʠ\.4kc~Jb 6uJoC&" 8k[N5 ZSeТmrmШ6F#:D&ra_Bw #Z[  Z{.@ >-p )mPY<Yά]/:ZT'7)Mb&ۯȝwݥ"b.ngnD\:ɗj5 Sv̬QYgD*>܏6Ei9b BH TcL!! +|$]9Q9 WMTz1?m-' jR⏪Lȳ!%UrꥎG;h+}T-+cH%d?%A@rG{ ۻa;aP` k_EAi0qHUz,Huc) {qB ^sBe$0mƓ`.2#^̫]VFl1f3̥h+dPi@ @8Hx@T(&BVWP:/MYWg +ߖg;*Bh3:KRx*F D1{2HgT:,{?! " %m&'7='t!hsv+a+xƙ.ȣQZpضp-OѵVҶhHՆ}X~bxJeC Gp\3 ڕږrbգYyӱrJ0X6uux^nQpF)# O { ֒5I5ܫ'φVʨǩOC|tt9֬"(\>Ud>|ZǽC;4]껽jo5ŋ:?ݛO woޟe-wϘ󛟝OOޘ,4M?1߮cbĉd_Nn{'\69>&ksvG:{fOp>tݭuoGs:L=sdzclHM[ 7?EOձJˑשGeNG+0x`zw v|zpeq%)@Hu"o0ӂ~ 1l4Lĺ$6H&#(Q X3msW_L]J1RMuKl0mRwYS³UR\KKۨZIKHz?  c`GXa ( A`(b0RV+2c(!|;KU9v2h1J>(%ReM֛oZ)Lx@9-cot.* D †rY2X4Y %Q ȓ9ߝ *ahպ,J@H ­,bh !Qo_mS#_/WN8kƪKi&~ޕ]9Mizjm?c]ڴ@!Ņm#(Uʪ%\HTA%Tń6-^& *5ЮM2e@ THp(2+QQER"#j!.C,J_qFJ1qgdwf?~. = IU¤sJe3j.i{9ƥRSP=\* y6/$#TI[WΗ_Fߌp#mAs/Ao KH{<Y {ʎ[ I *x n`&RbS"]^FQBbJ.TfĭC#t =JUjI"L=WY!> !d`DZCBHR? 
var/home/core/zuul-output/logs/kubelet.log
Jan 30 15:55:49 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 15:55:49 crc restorecon[4675]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 15:55:49 crc restorecon[4675]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 15:55:49 crc restorecon[4675]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 15:55:49 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 15:55:50 crc restorecon[4675]: 
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc 
restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 15:55:50 crc restorecon[4675]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 15:55:50 crc restorecon[4675]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 15:55:52 crc kubenswrapper[4740]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:55:52 crc kubenswrapper[4740]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 15:55:52 crc kubenswrapper[4740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 15:55:52 crc kubenswrapper[4740]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 15:55:52 crc kubenswrapper[4740]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 15:55:52 crc kubenswrapper[4740]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.793743 4740 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811573 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811641 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811650 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811659 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811666 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811675 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811683 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811693 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811700 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811706 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811715 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811722 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811729 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811736 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811743 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811750 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811790 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811799 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811806 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811813 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811820 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811826 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811833 4740 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811839 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811845 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811852 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811858 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811865 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811875 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811884 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811893 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811901 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811909 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811916 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811922 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811929 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811935 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811941 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811951 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
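The W0130 feature_gate.go:330 warnings are emitted once for each gate name the kubelet's own feature-gate registry does not know; the names here are OpenShift cluster-level gates with no kubelet counterpart, so the warnings are benign. The same gate set is re-parsed several times further down, so every name repeats. A small sketch for tallying them out of a log like this one (the log path is a placeholder):

    # Sketch: count "unrecognized feature gate" warnings per gate name.
    # Counts > 1 are expected, since the gate set is parsed repeatedly.
    import re
    from collections import Counter

    PATTERN = re.compile(r"unrecognized feature gate: (\S+)")

    def tally_unrecognized(log_path: str) -> Counter:
        counts = Counter()
        with open(log_path, encoding="utf-8", errors="replace") as f:
            for line in f:
                m = PATTERN.search(line)
                if m:
                    counts[m.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for gate, n in tally_unrecognized("kubelet.log").most_common(10):
            print(f"{n:3d}  {gate}")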
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811962 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811969 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811977 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811985 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.811992 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812000 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812007 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812013 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812021 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812032 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812043 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812050 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812057 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812064 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812070 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812076 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812081 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812088 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812094 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812100 4740 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812106 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812114 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812121 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812127 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812134 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812140 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812147 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812153 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812159 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812164 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812175 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.812183 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812404 4740 flags.go:64] FLAG: --address="0.0.0.0"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812425 4740 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812439 4740 flags.go:64] FLAG: --anonymous-auth="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812450 4740 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812460 4740 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812468 4740 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812480 4740 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812489 4740 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812497 4740 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812505 4740 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812513 4740 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812522 4740 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812531 4740 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812538 4740 flags.go:64] FLAG: --cgroup-root=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812545 4740 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812552 4740 flags.go:64] FLAG: --client-ca-file=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812559 4740 flags.go:64] FLAG: --cloud-config=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812568 4740 flags.go:64] FLAG: --cloud-provider=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812576 4740 flags.go:64] FLAG: --cluster-dns="[]"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812587 4740 flags.go:64] FLAG: --cluster-domain=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812595 4740 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812603 4740 flags.go:64] FLAG: --config-dir=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812611 4740 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812620 4740 flags.go:64] FLAG: --container-log-max-files="5"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812630 4740 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812638 4740 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812645 4740 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812654 4740 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812662 4740 flags.go:64] FLAG: --contention-profiling="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812670 4740 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812678 4740 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812686 4740 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812696 4740 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812708 4740 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812716 4740 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812724 4740 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812732 4740 flags.go:64] FLAG: --enable-load-reader="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812740 4740 flags.go:64] FLAG: --enable-server="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812748 4740 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812760 4740 flags.go:64] FLAG: --event-burst="100"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812768 4740 flags.go:64] FLAG: --event-qps="50"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812776 4740 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812784 4740 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812792 4740 flags.go:64] FLAG: --eviction-hard=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812802 4740 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812809 4740 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812817 4740 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812827 4740 flags.go:64] FLAG: --eviction-soft=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812844 4740 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812852 4740 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812859 4740 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812867 4740 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812875 4740 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812883 4740 flags.go:64] FLAG: --fail-swap-on="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812891 4740 flags.go:64] FLAG: --feature-gates=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812901 4740 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812909 4740 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812918 4740 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812926 4740 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812935 4740 flags.go:64] FLAG: --healthz-port="10248"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812942 4740 flags.go:64] FLAG: --help="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812950 4740 flags.go:64] FLAG: --hostname-override=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812957 4740 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812965 4740 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812972 4740 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812978 4740 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812984 4740 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812991 4740 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.812999 4740 flags.go:64] FLAG: --image-service-endpoint=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813004 4740 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813011 4740 flags.go:64] FLAG: --kube-api-burst="100"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813018 4740 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813025 4740 flags.go:64] FLAG: --kube-api-qps="50"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813032 4740 flags.go:64] FLAG: --kube-reserved=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813038 4740 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813045 4740 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813051 4740 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813057 4740 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813063 4740 flags.go:64] FLAG: --lock-file=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813068 4740 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813080 4740 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813087 4740 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813099 4740 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813109 4740 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813117 4740 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813125 4740 flags.go:64] FLAG: --logging-format="text"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813132 4740 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813140 4740 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813148 4740 flags.go:64] FLAG: --manifest-url=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813156 4740 flags.go:64] FLAG: --manifest-url-header=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813168 4740 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813177 4740 flags.go:64] FLAG: --max-open-files="1000000"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813187 4740 flags.go:64] FLAG: --max-pods="110"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813194 4740 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813202 4740 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813210 4740 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813219 4740 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813227 4740 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813235 4740 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813243 4740 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813260 4740 flags.go:64] FLAG: --node-status-max-images="50"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813268 4740 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813276 4740 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813283 4740 flags.go:64] FLAG: --pod-cidr=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813290 4740 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813304 4740 flags.go:64] FLAG: --pod-manifest-path=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813311 4740 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813319 4740 flags.go:64] FLAG: --pods-per-core="0"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813327 4740 flags.go:64] FLAG: --port="10250"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813334 4740 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813340 4740 flags.go:64] FLAG: --provider-id=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813373 4740 flags.go:64] FLAG: --qos-reserved=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813383 4740 flags.go:64] FLAG: --read-only-port="10255"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813389 4740 flags.go:64] FLAG: --register-node="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813396 4740 flags.go:64] FLAG: --register-schedulable="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813402 4740 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813413 4740 flags.go:64] FLAG: --registry-burst="10"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813419 4740 flags.go:64] FLAG: --registry-qps="5"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813425 4740 flags.go:64] FLAG: --reserved-cpus=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813432 4740 flags.go:64] FLAG: --reserved-memory=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813440 4740 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813447 4740 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813453 4740 flags.go:64] FLAG: --rotate-certificates="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813459 4740 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813465 4740 flags.go:64] FLAG: --runonce="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813471 4740 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813478 4740 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813484 4740 flags.go:64] FLAG: --seccomp-default="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813490 4740 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813497 4740 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813503 4740 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813510 4740 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813516 4740 flags.go:64] FLAG: --storage-driver-password="root"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813522 4740 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813528 4740 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813534 4740 flags.go:64] FLAG: --storage-driver-user="root"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813540 4740 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813546 4740 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813553 4740 flags.go:64] FLAG: --system-cgroups=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813559 4740 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813569 4740 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813575 4740 flags.go:64] FLAG: --tls-cert-file=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813582 4740 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813590 4740 flags.go:64] FLAG: --tls-min-version=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813601 4740 flags.go:64] FLAG: --tls-private-key-file=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813607 4740 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813613 4740 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813621 4740 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813629 4740 flags.go:64] FLAG: --v="2"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813640 4740 flags.go:64] FLAG: --version="false"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813650 4740 flags.go:64] FLAG: --vmodule=""
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813659 4740 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.813668 4740 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813852 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813864 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813873 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813880 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813887 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813894 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813901 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813907 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813920 4740 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813927 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813935 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813942 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813949 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813956 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813961 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813966 4740 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813973 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813980 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813986 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813991 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.813997 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814002 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814007 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814014 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814019 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814025 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814030 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814036 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814041 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814047 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814052 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814058 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814063 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814068 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814074 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814079 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814085 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814092 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
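Each flags.go:64 entry above records one CLI flag together with its effective value, quoted. A sketch that parses the dump back into a dict, which is handy for diffing kubelet invocations between boots (the log path is a placeholder):

    # Sketch: recover the effective kubelet flag values from the
    # flags.go:64 dump in a log like this one.
    import re

    FLAG_RE = re.compile(r'flags\.go:64\] FLAG: (--[\w.-]+)="(.*?)"')

    def parse_flags(log_path: str) -> dict:
        flags = {}
        with open(log_path, encoding="utf-8", errors="replace") as f:
            for line in f:
                m = FLAG_RE.search(line)
                if m:
                    flags[m.group(1)] = m.group(2)
        return flags

    if __name__ == "__main__":
        flags = parse_flags("kubelet.log")
        print(flags.get("--node-ip"))               # e.g. 192.168.126.11
        print(flags.get("--register-with-taints"))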
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814100 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814105 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814113 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814119 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814124 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814131 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814137 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814150 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814160 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814168 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814175 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814182 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814189 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814196 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814203 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814211 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814245 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814256 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814263 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814271 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814278 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814284 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814291 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814297 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814305 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814312 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814319 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814325 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814335 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814344 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814375 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814383 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.814389 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.814412 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.847857 4740 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.847910 4740 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848004 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
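The feature_gate.go:386 line just above is the authoritative summary: only the gates kubelet actually recognizes are listed, with their resolved values. A sketch that turns that one line into a {name: bool} map:

    # Sketch: parse kubelet's "feature gates: {map[...]}" summary line.
    import re

    def parse_gate_summary(line: str) -> dict:
        m = re.search(r"feature gates: \{map\[(.*)\]\}", line)
        if not m:
            return {}
        return {
            name: value == "true"
            for name, value in (pair.split(":") for pair in m.group(1).split())
        }

    line = ("feature gates: {map[CloudDualStackNodeIPs:true "
            "DisableKubeletCloudCredentialProviders:true KMSv1:true "
            "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")
    print(parse_gate_summary(line)["KMSv1"])  # True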
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848016 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848021 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848026 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848032 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848038 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848042 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848047 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848051 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848055 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848060 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848065 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848070 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848075 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848080 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848084 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848088 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848092 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848095 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848099 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848103 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848107 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848111 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848115 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848119 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848123 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848127 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848131 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848135 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848141 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848145 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848150 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848154 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848158 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848162 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848167 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848172 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848176 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848180 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848184 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848188 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848192 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848196 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848200 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848205 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848209 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848213 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848218 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848223 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848227 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848232 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848238 4740 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848242 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848247 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848251 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848255 4740 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848260 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848266 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848271 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848276 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848281 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848287 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848292 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848297 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848301 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848307 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848311 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848316 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848322 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848327 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848331 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.848339 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848503 4740 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848512 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848518 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848523 4740 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848528 4740 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848532 4740 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848537 4740 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848542 4740 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848547 4740 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848551 4740 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848556 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848560 4740 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848565 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848568 4740 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848572 4740 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848576 4740 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848580 4740 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848585 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848588 4740 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848592 4740 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848597 4740 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848601 4740 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848606 4740 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848609 4740 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848613 4740 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848618 4740 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848621 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848625 4740 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848631 4740 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848638 4740 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848643 4740 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848648 4740 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848652 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848657 4740 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848661 4740 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848666 4740 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848671 4740 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848675 4740 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848680 4740 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848685 4740 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848689 4740 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848693 4740 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848697 4740 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848701 4740 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848705 4740 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848708 4740 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848712 4740 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848716 4740 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848720 4740 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848724 4740 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848728 4740 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848732 4740 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848739 4740 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848743 4740 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848748 4740 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848753 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848757 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848763 4740 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848767 4740 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848771 4740 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848775 4740 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848778 4740 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848782 4740 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848786 4740 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848790 4740 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848795 4740 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848798 4740 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848802 4740 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848806 4740 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848810 4740 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 15:55:52 crc kubenswrapper[4740]: W0130 15:55:52.848813 4740 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.848820 4740 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.849977 4740 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.855539 4740 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.855637 4740 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
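The rotation deadline logged just below (2025-12-28) sits well before the certificate's expiry (2026-02-24) because kubelet's certificate manager schedules rotation at a jittered point, roughly between 70% and 90% of the certificate's validity window. A sketch of that rule; the one-year validity here is an assumption for illustration, since the real notBefore comes from the certificate itself:

    # Sketch: jittered rotation deadline in the style of kubelet's
    # certificate manager (random point in ~[70%, 90%] of cert lifetime).
    import random
    from datetime import datetime, timedelta, timezone

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        total = (not_after - not_before).total_seconds()
        fraction = 0.7 + 0.2 * random.random()  # uniform in [0.7, 0.9)
        return not_before + timedelta(seconds=total * fraction)

    not_after = datetime(2026, 2, 24, 5, 52, 8, tzinfo=timezone.utc)
    not_before = not_after - timedelta(days=365)  # assumed issue date
    print(rotation_deadline(not_before, not_after))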
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.859015 4740 server.go:997] "Starting client certificate rotation"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.859041 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.866603 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-28 01:57:17.489843647 +0000 UTC
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.866760 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.946118 4740 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 15:55:52 crc kubenswrapper[4740]: I0130 15:55:52.949046 4740 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 15:55:52 crc kubenswrapper[4740]: E0130 15:55:52.949483 4740 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.032640 4740 log.go:25] "Validated CRI v1 runtime API"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.101782 4740 log.go:25] "Validated CRI v1 image API"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.105004 4740 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.110050 4740 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-15-40-07-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.110083 4740 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.125628 4740 manager.go:217] Machine: {Timestamp:2026-01-30 15:55:53.123685214 +0000 UTC m=+1.760747833 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:cbfd1cc5-d98d-49aa-89cf-5db774a30b6e BootID:f3fdd8ea-a373-4a34-8018-9155cc4dd491 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22
Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:97:7c:72 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:97:7c:72 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:72:16:38 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:28:74:7e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:aa:a7:04 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3e:cc:56 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fe:e4:0f:80:5a:c3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:be:7c:24:7a:50:2b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.125882 4740 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.126038 4740 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.128042 4740 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.128234 4740 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.128275 4740 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.128531 4740 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.128545 4740 container_manager_linux.go:303] "Creating device plugin manager"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.129186 4740 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.129221 4740 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.129515 4740 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.129681 4740 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.134295 4740 kubelet.go:418] "Attempting to sync node with API server"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.134332 4740 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.134377 4740 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.134394 4740 kubelet.go:324] "Adding apiserver pod source"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.134410 4740 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 15:55:53 crc kubenswrapper[4740]: W0130 15:55:53.140515 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused
Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.140680 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:55:53 crc kubenswrapper[4740]: W0130 15:55:53.140516 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused
Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.140804 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.142312 4740 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.143578 4740 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
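
Both the client certificate loaded earlier (kubelet-client-current.pem, rotation deadline 2025-12-28 in the certificate_manager.go lines above) and the serving certificate loaded here are renewed ahead of expiry at a jittered point in the validity window, so a fleet of kubelets does not stampede the CSR API all at once. A Go sketch of that computation, assuming a uniform jitter in [0.7, 0.9) of the validity window (the exact factor in client-go's certificate_manager may differ) and an issue date one year before the logged expiry, since the log does not show NotBefore:

    package main

    import (
            "fmt"
            "math/rand"
            "time"
    )

    // rotationDeadline returns notBefore + jitter*(notAfter-notBefore),
    // with jitter drawn uniformly from [0.7, 0.9).
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
            total := notAfter.Sub(notBefore)
            jitter := 0.7 + 0.2*rand.Float64()
            return notBefore.Add(time.Duration(float64(total) * jitter))
    }

    func main() {
            notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC) // assumed issue time
            notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)  // matches the logged expiration
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }

Under those assumptions the deadline lands between roughly 2025-11-07 and 2025-12-31, which is consistent with the 2025-12-28 deadline logged above; once the deadline passes, the kubelet posts a CertificateSigningRequest, which is exactly the call failing here with "connection refused" because the API server is not up yet.
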
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.145849 4740 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147420 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147451 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147463 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147474 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147489 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147499 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147508 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147528 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147540 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147551 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147587 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147629 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.147668 4740 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.148261 4740 server.go:1280] "Started kubelet"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.148569 4740 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.149380 4740 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.148928 4740 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.149944 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused
Jan 30 15:55:53 crc systemd[1]: Started Kubernetes Kubelet.
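
Once "Started kubelet" appears, the podresources gRPC service is being served on the logged unix socket (rate-limited to qps=100, burstTokens=10 per the ratelimit.go line), and node agents can query it for per-pod resource assignments. A minimal client sketch, assuming the generated v1 bindings from k8s.io/kubelet and a reachable socket at the logged path:

    package main

    import (
            "context"
            "fmt"
            "log"
            "time"

            "google.golang.org/grpc"
            "google.golang.org/grpc/credentials/insecure"
            podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
    )

    func main() {
            // Socket path matches endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" above.
            conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
                    grpc.WithTransportCredentials(insecure.NewCredentials()))
            if err != nil {
                    log.Fatal(err)
            }
            defer conn.Close()

            ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
            defer cancel()

            resp, err := podresourcesv1.NewPodResourcesListerClient(conn).
                    List(ctx, &podresourcesv1.ListPodResourcesRequest{})
            if err != nil {
                    log.Fatal(err) // fails if the kubelet is not yet serving the socket
            }
            for _, p := range resp.GetPodResources() {
                    fmt.Printf("%s/%s: %d container(s)\n",
                            p.GetNamespace(), p.GetName(), len(p.GetContainers()))
            }
    }

Note the service is local-only (a unix socket, no TLS), which is why insecure transport credentials are appropriate here.
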
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.153291 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.153309 4740 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.153386 4740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.154535 4740 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.154579 4740 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.154556 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:59:45.592043937 +0000 UTC
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.154671 4740 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.228079 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 30 15:55:53 crc kubenswrapper[4740]: W0130 15:55:53.228371 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused
Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.228466 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.228714 4740 factory.go:55] Registering systemd factory
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.228738 4740 factory.go:221] Registration of the systemd container factory successfully
Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.228726 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="200ms"
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.229259 4740 factory.go:153] Registering CRI-O factory
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.229291 4740 factory.go:221] Registration of the crio container factory successfully
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.229396 4740 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.229427 4740 factory.go:103] Registering Raw factory
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.229446 4740 manager.go:1196] Started watching for new ooms in manager
Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.230736 4740 manager.go:319] Starting recovery of all containers
Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.230654 4740 event.go:368] "Unable to write
event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.121:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f8d4ff5f3d9c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 15:55:53.148225993 +0000 UTC m=+1.785288612,LastTimestamp:2026-01-30 15:55:53.148225993 +0000 UTC m=+1.785288612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235496 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235550 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235587 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235601 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235612 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235623 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235639 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235651 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235666 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235679 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235691 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235702 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235712 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235724 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235735 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235749 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235761 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235771 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235781 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235791 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235804 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235813 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235824 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235837 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235849 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235864 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235882 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235926 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235938 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.235953 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236000 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236039 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236049 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236060 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236071 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236083 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236095 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236112 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236127 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236141 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236155 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236170 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236184 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236198 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236212 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236225 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236241 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236258 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236270 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236283 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236297 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236312 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236331 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236365 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236379 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236394 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236410 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236426 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236441 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236454 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236468 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236481 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236494 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236511 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236524 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236535 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236548 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236560 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236576 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236588 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236600 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236611 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236623 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236634 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236650 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236664 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236677 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236690 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236702 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236718 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236732 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236746 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236762 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236778 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236794 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236805 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236818 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236835 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236850 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236863 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236875 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236888 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236900 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236912 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236923 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236940 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236953 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236965 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236977 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.236990 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237003 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237018 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237033 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237048 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237071 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237087 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237102 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237115 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237130 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237146 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237162 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237178 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237191 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237207 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237223 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237243 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237257 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237273 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237286 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237300 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237314 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237327 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237339 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237366 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237380 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.237395 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239241 4740 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239267 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239279 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239294 4740 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239303 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239312 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239323 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239331 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239340 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239363 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239373 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239383 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239394 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239405 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239416 4740 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239427 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239447 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239459 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239469 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239479 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239490 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239502 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239512 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239522 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239533 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239543 4740 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239576 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239586 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239596 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239610 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239623 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239635 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239646 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239661 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239676 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239689 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239701 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239713 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239726 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239738 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239749 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239758 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239767 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239778 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239789 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239799 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239822 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239832 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239844 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239855 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239865 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239875 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239885 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239895 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239904 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239914 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239924 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239934 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239947 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239961 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239979 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.239991 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240005 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240017 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240030 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240042 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240055 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240071 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240085 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240096 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240111 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240123 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240138 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240151 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240164 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240181 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240194 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240206 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240218 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240230 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240615 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240634 4740 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240645 4740 reconstruct.go:97] "Volume reconstruction finished" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.240654 4740 reconciler.go:26] "Reconciler: start to sync state" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.252113 4740 manager.go:324] Recovery completed Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.262432 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.265013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.265056 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.265069 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.267216 4740 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.267248 4740 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.267272 4740 state_mem.go:36] "Initialized new in-memory state store" Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.328501 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.331022 4740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.333502 4740 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.333743 4740 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.333977 4740 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.334202 4740 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 15:55:53 crc kubenswrapper[4740]: W0130 15:55:53.335902 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.336167 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.428873 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.429542 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="400ms" Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.435560 4740 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.517059 4740 policy_none.go:49] "None policy: Start" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.519062 4740 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.519234 4740 state_mem.go:35] "Initializing new in-memory state store" Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.529309 4740 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.572985 4740 manager.go:334] "Starting Device Plugin manager" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.573039 4740 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.573053 4740 server.go:79] "Starting device plugin registration server" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.573605 4740 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.573626 4740 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.573801 4740 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.573926 4740 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 
15:55:53.573940 4740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.582014 4740 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.636708 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.636860 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.638336 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.638387 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.638398 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.638555 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.638896 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.638959 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.639460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.639492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.639501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.639622 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.639772 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.639826 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.640056 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.640110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.640124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.640680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.640704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.640714 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.640832 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.640909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.640934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.640944 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.641020 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.641065 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.641483 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.641536 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.641548 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.641722 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.641877 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.641899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.641917 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.641920 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.642076 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.642525 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.642553 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.642563 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.642976 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.642999 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.643011 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.643214 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.643252 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.644111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.644130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.644140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.673824 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.674777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.674816 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.674829 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.674856 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.675471 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.121:6443: connect: connection refused" node="crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.750777 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.750857 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.750904 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.750923 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.750940 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.750971 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.750986 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.751000 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.751085 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.751105 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.751156 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.751176 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.751195 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.751229 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.751250 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.831464 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="800ms" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.852799 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.852895 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.852944 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.852990 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853027 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853078 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853030 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853125 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853128 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853156 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853138 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853173 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853142 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853242 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853266 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853285 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853205 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853324 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853401 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853410 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853478 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853478 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853487 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853510 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853542 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853584 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853632 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853627 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853735 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.853644 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.875619 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.877318 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.877424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.877449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.877498 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 15:55:53 crc kubenswrapper[4740]: E0130 15:55:53.878147 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.121:6443: connect: connection refused" node="crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.974413 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:55:53 crc kubenswrapper[4740]: I0130 15:55:53.992802 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.009443 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.019012 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.026170 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.151915 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.154995 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:37:34.500509916 +0000 UTC Jan 30 15:55:54 crc kubenswrapper[4740]: W0130 15:55:54.181161 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-39e0005fe916f4d192da8ee16007651661bb1fbe8c639b7f665acfbe8095f4fb WatchSource:0}: Error finding container 39e0005fe916f4d192da8ee16007651661bb1fbe8c639b7f665acfbe8095f4fb: Status 404 returned error can't find the container with id 39e0005fe916f4d192da8ee16007651661bb1fbe8c639b7f665acfbe8095f4fb Jan 30 15:55:54 crc kubenswrapper[4740]: W0130 15:55:54.182784 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b9a1f437303930efed7da465b92db701f8f518649ba8b5a89d15bf5008a6f56e WatchSource:0}: Error finding container b9a1f437303930efed7da465b92db701f8f518649ba8b5a89d15bf5008a6f56e: Status 404 returned error can't find the container with id b9a1f437303930efed7da465b92db701f8f518649ba8b5a89d15bf5008a6f56e Jan 30 15:55:54 crc kubenswrapper[4740]: W0130 15:55:54.183856 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-97a869cbe792690991b9920eef7230166c78a681c11d948992df49f57c19b335 WatchSource:0}: Error finding container 97a869cbe792690991b9920eef7230166c78a681c11d948992df49f57c19b335: Status 404 returned error can't find the container with id 97a869cbe792690991b9920eef7230166c78a681c11d948992df49f57c19b335 Jan 30 15:55:54 crc kubenswrapper[4740]: W0130 15:55:54.184835 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-af26e14c709e70d49be820b68a0f5281877ccca43ea69478c753b5c05ef70527 WatchSource:0}: Error finding container af26e14c709e70d49be820b68a0f5281877ccca43ea69478c753b5c05ef70527: Status 404 returned error can't find the container with id af26e14c709e70d49be820b68a0f5281877ccca43ea69478c753b5c05ef70527 Jan 30 15:55:54 crc kubenswrapper[4740]: W0130 15:55:54.186141 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a326872d7a72bf72320a2f49c53d992e8b68a3598fe304e518b702dcac744922 WatchSource:0}: Error finding container a326872d7a72bf72320a2f49c53d992e8b68a3598fe304e518b702dcac744922: Status 404 returned error can't find the container with id a326872d7a72bf72320a2f49c53d992e8b68a3598fe304e518b702dcac744922 Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.278275 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.280696 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.280744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.280756 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.280784 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 15:55:54 crc kubenswrapper[4740]: E0130 15:55:54.281297 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.121:6443: connect: connection refused" node="crc" Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.341643 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"af26e14c709e70d49be820b68a0f5281877ccca43ea69478c753b5c05ef70527"} Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.343339 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"39e0005fe916f4d192da8ee16007651661bb1fbe8c639b7f665acfbe8095f4fb"} Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.344388 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a326872d7a72bf72320a2f49c53d992e8b68a3598fe304e518b702dcac744922"} Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.345337 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"97a869cbe792690991b9920eef7230166c78a681c11d948992df49f57c19b335"} Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.346319 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b9a1f437303930efed7da465b92db701f8f518649ba8b5a89d15bf5008a6f56e"} Jan 30 15:55:54 crc kubenswrapper[4740]: W0130 15:55:54.372492 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:54 crc kubenswrapper[4740]: E0130 15:55:54.372654 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:54 crc kubenswrapper[4740]: W0130 15:55:54.386041 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:54 crc kubenswrapper[4740]: E0130 
15:55:54.386124 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:54 crc kubenswrapper[4740]: W0130 15:55:54.412548 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:54 crc kubenswrapper[4740]: E0130 15:55:54.412672 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:54 crc kubenswrapper[4740]: W0130 15:55:54.528570 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:54 crc kubenswrapper[4740]: E0130 15:55:54.528692 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:54 crc kubenswrapper[4740]: E0130 15:55:54.633329 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="1.6s" Jan 30 15:55:54 crc kubenswrapper[4740]: I0130 15:55:54.991379 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 15:55:54 crc kubenswrapper[4740]: E0130 15:55:54.992868 4740 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:55 crc kubenswrapper[4740]: I0130 15:55:55.081945 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:55 crc kubenswrapper[4740]: I0130 15:55:55.083612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:55 crc kubenswrapper[4740]: I0130 15:55:55.083657 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:55 crc kubenswrapper[4740]: I0130 15:55:55.083672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:55 crc kubenswrapper[4740]: I0130 15:55:55.083700 4740 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 15:55:55 crc kubenswrapper[4740]: E0130 15:55:55.084311 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.121:6443: connect: connection refused" node="crc" Jan 30 15:55:55 crc kubenswrapper[4740]: I0130 15:55:55.151057 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:55 crc kubenswrapper[4740]: I0130 15:55:55.155273 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:33:43.881629834 +0000 UTC Jan 30 15:55:55 crc kubenswrapper[4740]: E0130 15:55:55.406219 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.121:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f8d4ff5f3d9c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 15:55:53.148225993 +0000 UTC m=+1.785288612,LastTimestamp:2026-01-30 15:55:53.148225993 +0000 UTC m=+1.785288612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 15:55:56 crc kubenswrapper[4740]: W0130 15:55:56.069528 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:56 crc kubenswrapper[4740]: E0130 15:55:56.070594 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:56 crc kubenswrapper[4740]: I0130 15:55:56.151978 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:56 crc kubenswrapper[4740]: I0130 15:55:56.156029 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:16:31.253108752 +0000 UTC Jan 30 15:55:56 crc kubenswrapper[4740]: W0130 15:55:56.231580 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:56 crc kubenswrapper[4740]: E0130 15:55:56.231708 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:56 crc kubenswrapper[4740]: E0130 15:55:56.234468 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="3.2s" Jan 30 15:55:56 crc kubenswrapper[4740]: I0130 15:55:56.684486 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:56 crc kubenswrapper[4740]: I0130 15:55:56.687000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:56 crc kubenswrapper[4740]: I0130 15:55:56.687042 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:56 crc kubenswrapper[4740]: I0130 15:55:56.687054 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:56 crc kubenswrapper[4740]: I0130 15:55:56.687085 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 15:55:56 crc kubenswrapper[4740]: E0130 15:55:56.687753 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.121:6443: connect: connection refused" node="crc" Jan 30 15:55:56 crc kubenswrapper[4740]: W0130 15:55:56.885743 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:56 crc kubenswrapper[4740]: E0130 15:55:56.885873 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:56 crc kubenswrapper[4740]: W0130 15:55:56.984231 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:56 crc kubenswrapper[4740]: E0130 15:55:56.984337 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.151703 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.156118 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:21:51.524264617 +0000 UTC Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.355299 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52"} Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.356863 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a" exitCode=0 Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.356916 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a"} Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.357017 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.358078 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.358139 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.358152 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.359498 4740 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05" exitCode=0 Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.359571 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05"} Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.359650 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.360822 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.362003 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.362048 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.362061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.362037 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.362097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.362109 4740 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.362411 4740 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8" exitCode=0 Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.362516 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8"} Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.362527 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.363629 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.363682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.363700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.364304 4740 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="14bad37d616cd79f0397cfbe88861d17767c7edca69866641ed052cd17a59c66" exitCode=0 Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.364364 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"14bad37d616cd79f0397cfbe88861d17767c7edca69866641ed052cd17a59c66"} Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.364486 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.365246 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.365301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:57 crc kubenswrapper[4740]: I0130 15:55:57.365319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:58 crc kubenswrapper[4740]: I0130 15:55:58.150804 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:58 crc kubenswrapper[4740]: I0130 15:55:58.156340 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:15:01.672731868 +0000 UTC Jan 30 15:55:58 crc kubenswrapper[4740]: I0130 15:55:58.370157 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b540992041764ee2a7fabcdda74b222e97ce330e8488b265a5486b559f09aba6"} Jan 30 15:55:58 crc kubenswrapper[4740]: I0130 15:55:58.373218 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6"} Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.151937 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.157044 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:29:37.928221591 +0000 UTC Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.284655 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 15:55:59 crc kubenswrapper[4740]: E0130 15:55:59.286210 4740 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.379613 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008"} Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.382858 4740 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e7702d55f3234ac65cc2dbacac180c56f746190afcbc9723eeb7de0d45617a98" exitCode=0 Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.382963 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e7702d55f3234ac65cc2dbacac180c56f746190afcbc9723eeb7de0d45617a98"} Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.386132 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7"} Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.386238 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.387776 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.387830 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.387850 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:59 crc kubenswrapper[4740]: E0130 15:55:59.436163 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: 
connection refused" interval="6.4s" Jan 30 15:55:59 crc kubenswrapper[4740]: W0130 15:55:59.451415 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:55:59 crc kubenswrapper[4740]: E0130 15:55:59.451530 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.887953 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.890749 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.890886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.890908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:55:59 crc kubenswrapper[4740]: I0130 15:55:59.890987 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 15:55:59 crc kubenswrapper[4740]: E0130 15:55:59.892019 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.121:6443: connect: connection refused" node="crc" Jan 30 15:56:00 crc kubenswrapper[4740]: I0130 15:56:00.151967 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:56:00 crc kubenswrapper[4740]: I0130 15:56:00.158043 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:26:11.827225598 +0000 UTC Jan 30 15:56:00 crc kubenswrapper[4740]: I0130 15:56:00.394675 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467"} Jan 30 15:56:00 crc kubenswrapper[4740]: I0130 15:56:00.395283 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:00 crc kubenswrapper[4740]: I0130 15:56:00.397557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:00 crc kubenswrapper[4740]: I0130 15:56:00.397610 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:00 crc kubenswrapper[4740]: I0130 15:56:00.397628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:00 crc kubenswrapper[4740]: W0130 15:56:00.656198 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: 
Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:56:00 crc kubenswrapper[4740]: E0130 15:56:00.656389 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:56:01 crc kubenswrapper[4740]: I0130 15:56:01.151850 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:56:01 crc kubenswrapper[4740]: I0130 15:56:01.159099 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:18:07.336013692 +0000 UTC Jan 30 15:56:01 crc kubenswrapper[4740]: I0130 15:56:01.401156 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca"} Jan 30 15:56:01 crc kubenswrapper[4740]: I0130 15:56:01.404633 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f"} Jan 30 15:56:01 crc kubenswrapper[4740]: W0130 15:56:01.625811 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:56:01 crc kubenswrapper[4740]: E0130 15:56:01.625939 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:56:01 crc kubenswrapper[4740]: W0130 15:56:01.979387 4740 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:56:01 crc kubenswrapper[4740]: E0130 15:56:01.979968 4740 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.121:6443: connect: connection refused" logger="UnhandledError" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.151182 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 
30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.159520 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:41:23.073919552 +0000 UTC Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.410855 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6"} Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.411066 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.412844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.412908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.412929 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.417662 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b"} Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.417716 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a"} Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.421653 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac"} Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.422249 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.425911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.425969 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.425987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.429933 4740 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7c87a7089f775e75e9ebdbc6f43f0533927f91cb329644573bbd5e4088af185e" exitCode=0 Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.429979 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7c87a7089f775e75e9ebdbc6f43f0533927f91cb329644573bbd5e4088af185e"} Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.430108 4740 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.431134 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.431225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:02 crc kubenswrapper[4740]: I0130 15:56:02.431307 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.152051 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.160257 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:06:17.590866083 +0000 UTC Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.266626 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.438932 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"487ae44143c58a632b68ef4565bffaed8e45f7b95d8ed357ef669d2b63eaaf7e"} Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.438968 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.440534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.440580 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.440591 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.444080 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9bf22ff785037823793cdf211ea74d0bb088e209e79b7bdcfa8868be2756fec2"} Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.444230 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.444311 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.444251 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.445661 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.445716 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.445727 4740 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.445761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.445778 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:03 crc kubenswrapper[4740]: I0130 15:56:03.445734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:03 crc kubenswrapper[4740]: E0130 15:56:03.582264 4740 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.068630 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.151716 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.161032 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:21:10.33519059 +0000 UTC Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.449975 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.449985 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a187615099d8eb2316e90d5d3cf8f9193fdc55f2362c08440c00bdcac439cc1a"} Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.450049 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.449985 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.449976 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.451613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.451639 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.451648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.451665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.451683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.451683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.451717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.451744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:04 crc kubenswrapper[4740]: I0130 15:56:04.451691 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.150678 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.121:6443: connect: connection refused Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.161994 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:25:24.728705026 +0000 UTC Jan 30 15:56:05 crc kubenswrapper[4740]: E0130 15:56:05.407920 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.121:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f8d4ff5f3d9c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 15:55:53.148225993 +0000 UTC m=+1.785288612,LastTimestamp:2026-01-30 15:55:53.148225993 +0000 UTC m=+1.785288612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.459681 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"779f48007b31f1c306d0ce8d2a473a667ebc1bb20af110df279c975b3417d328"} Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.459743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"84e20da08abae11660e7c75659fa97583bce84e3e01492f36db2adb9e4d90514"} Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.459758 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7831eaa07c89ee936c1eb0d2578e583401bfdf20a61449990a7975b3e2972a55"} Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.461800 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.464207 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="487ae44143c58a632b68ef4565bffaed8e45f7b95d8ed357ef669d2b63eaaf7e" exitCode=255 Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.464260 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"487ae44143c58a632b68ef4565bffaed8e45f7b95d8ed357ef669d2b63eaaf7e"} Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.464418 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.466648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.466688 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.466708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:05 crc kubenswrapper[4740]: I0130 15:56:05.467441 4740 scope.go:117] "RemoveContainer" containerID="487ae44143c58a632b68ef4565bffaed8e45f7b95d8ed357ef669d2b63eaaf7e" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.163127 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:18:44.811466605 +0000 UTC Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.188129 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.292450 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.294247 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.294301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.294322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.294399 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.470061 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.472761 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4"} Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.472909 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.472860 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.474130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.474166 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.474206 
4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.474217 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.474178 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:06 crc kubenswrapper[4740]: I0130 15:56:06.474287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.164406 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:16:01.699487118 +0000 UTC Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.476173 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.476293 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.477768 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.477828 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.477848 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.622560 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.622890 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.624777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.624851 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.624871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.894485 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.894758 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.896474 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.896518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.896537 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:07 crc kubenswrapper[4740]: I0130 15:56:07.903091 4740 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 15:56:08 crc kubenswrapper[4740]: I0130 15:56:08.166394 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:46:39.432134085 +0000 UTC Jan 30 15:56:08 crc kubenswrapper[4740]: I0130 15:56:08.480092 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:08 crc kubenswrapper[4740]: I0130 15:56:08.482003 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:08 crc kubenswrapper[4740]: I0130 15:56:08.482071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:08 crc kubenswrapper[4740]: I0130 15:56:08.482095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:09 crc kubenswrapper[4740]: I0130 15:56:09.167468 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:31:03.418314572 +0000 UTC Jan 30 15:56:10 crc kubenswrapper[4740]: I0130 15:56:10.167676 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:39:42.019639754 +0000 UTC Jan 30 15:56:11 crc kubenswrapper[4740]: I0130 15:56:11.168780 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:13:31.672983496 +0000 UTC Jan 30 15:56:11 crc kubenswrapper[4740]: I0130 15:56:11.557549 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:56:11 crc kubenswrapper[4740]: I0130 15:56:11.557830 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:11 crc kubenswrapper[4740]: I0130 15:56:11.559663 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:11 crc kubenswrapper[4740]: I0130 15:56:11.559733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:11 crc kubenswrapper[4740]: I0130 15:56:11.559755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:11 crc kubenswrapper[4740]: I0130 15:56:11.565562 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:56:11 crc kubenswrapper[4740]: I0130 15:56:11.583436 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:56:12 crc kubenswrapper[4740]: I0130 15:56:12.169688 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:40:10.786937849 +0000 UTC Jan 30 15:56:12 crc kubenswrapper[4740]: I0130 15:56:12.493041 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:12 crc kubenswrapper[4740]: I0130 15:56:12.494809 4740 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:12 crc kubenswrapper[4740]: I0130 15:56:12.494884 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:12 crc kubenswrapper[4740]: I0130 15:56:12.494907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:12 crc kubenswrapper[4740]: I0130 15:56:12.499168 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:56:13 crc kubenswrapper[4740]: I0130 15:56:13.170072 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:14:37.146008559 +0000 UTC Jan 30 15:56:13 crc kubenswrapper[4740]: I0130 15:56:13.495852 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:13 crc kubenswrapper[4740]: I0130 15:56:13.497343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:13 crc kubenswrapper[4740]: I0130 15:56:13.497476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:13 crc kubenswrapper[4740]: I0130 15:56:13.497506 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:13 crc kubenswrapper[4740]: E0130 15:56:13.582543 4740 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 15:56:13 crc kubenswrapper[4740]: I0130 15:56:13.697003 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 15:56:13 crc kubenswrapper[4740]: I0130 15:56:13.697324 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:13 crc kubenswrapper[4740]: I0130 15:56:13.704826 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:13 crc kubenswrapper[4740]: I0130 15:56:13.704924 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:13 crc kubenswrapper[4740]: I0130 15:56:13.704971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:13 crc kubenswrapper[4740]: I0130 15:56:13.808592 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 15:56:14.170779 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:41:54.678096268 +0000 UTC Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 15:56:14.499031 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 15:56:14.499086 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 15:56:14.500991 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 
15:56:14.501023 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 15:56:14.501071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 15:56:14.501085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 15:56:14.501041 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 15:56:14.501196 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 15:56:14.516821 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 15:56:14.558216 4740 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 15:56:14 crc kubenswrapper[4740]: I0130 15:56:14.558330 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 15:56:15 crc kubenswrapper[4740]: I0130 15:56:15.171276 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:28:43.002598484 +0000 UTC Jan 30 15:56:15 crc kubenswrapper[4740]: I0130 15:56:15.502213 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:15 crc kubenswrapper[4740]: I0130 15:56:15.503908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:15 crc kubenswrapper[4740]: I0130 15:56:15.503972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:15 crc kubenswrapper[4740]: I0130 15:56:15.503997 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:15 crc kubenswrapper[4740]: E0130 15:56:15.837701 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 30 15:56:16 crc kubenswrapper[4740]: I0130 15:56:16.152165 4740 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 30 15:56:16 crc kubenswrapper[4740]: I0130 15:56:16.172056 4740 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:18:41.624034464 +0000 UTC Jan 30 15:56:16 crc kubenswrapper[4740]: I0130 15:56:16.188487 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 15:56:16 crc kubenswrapper[4740]: I0130 15:56:16.188879 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 15:56:16 crc kubenswrapper[4740]: E0130 15:56:16.296146 4740 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 30 15:56:17 crc kubenswrapper[4740]: I0130 15:56:17.051376 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 15:56:17 crc kubenswrapper[4740]: I0130 15:56:17.051462 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 15:56:17 crc kubenswrapper[4740]: I0130 15:56:17.173197 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:45:00.65064601 +0000 UTC Jan 30 15:56:18 crc kubenswrapper[4740]: I0130 15:56:18.174645 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:32:05.962144248 +0000 UTC Jan 30 15:56:19 crc kubenswrapper[4740]: I0130 15:56:19.176060 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:17:12.073986845 +0000 UTC Jan 30 15:56:20 crc kubenswrapper[4740]: I0130 15:56:20.176214 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:50:34.568704443 +0000 UTC Jan 30 15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.177095 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:33:34.50404291 +0000 UTC Jan 30 15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.193005 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.193600 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 
15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.195008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.195064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.195078 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.197527 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.550579 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.550648 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.551707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.551743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:21 crc kubenswrapper[4740]: I0130 15:56:21.551754 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.048326 4740 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.178130 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:05:30.137754088 +0000 UTC Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.584298 4740 trace.go:236] Trace[1742256199]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 15:56:07.096) (total time: 15487ms): Jan 30 15:56:22 crc kubenswrapper[4740]: Trace[1742256199]: ---"Objects listed" error: 15487ms (15:56:22.584) Jan 30 15:56:22 crc kubenswrapper[4740]: Trace[1742256199]: [15.487267767s] [15.487267767s] END Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.584331 4740 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.585251 4740 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.585290 4740 trace.go:236] Trace[386352831]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 15:56:09.766) (total time: 12819ms): Jan 30 15:56:22 crc kubenswrapper[4740]: Trace[386352831]: ---"Objects listed" error: 12818ms (15:56:22.585) Jan 30 15:56:22 crc kubenswrapper[4740]: Trace[386352831]: [12.819046038s] [12.819046038s] END Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.585329 4740 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.585406 4740 trace.go:236] Trace[1982988118]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 15:56:11.890) (total time: 10695ms): Jan 30 15:56:22 crc kubenswrapper[4740]: Trace[1982988118]: ---"Objects 
listed" error: 10694ms (15:56:22.585) Jan 30 15:56:22 crc kubenswrapper[4740]: Trace[1982988118]: [10.695128963s] [10.695128963s] END Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.585423 4740 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.590024 4740 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.592172 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51090->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.592249 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51090->192.168.126.11:17697: read: connection reset by peer" Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.594024 4740 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.594055 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.796402 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:56:22 crc kubenswrapper[4740]: I0130 15:56:22.799738 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.155728 4740 apiserver.go:52] "Watching apiserver" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.160165 4740 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.160687 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.161162 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.161283 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.161633 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.161746 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.161842 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.161936 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.162069 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.162135 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.162218 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.168543 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.168577 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.168546 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.170837 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.171446 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.173084 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.173447 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.174114 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.174139 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.178751 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:48:04.879715999 +0000 UTC Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.225828 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.235822 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.247131 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.256042 4740 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.272921 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.289907 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.289968 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290001 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290030 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290062 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290085 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290108 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 
15:56:23.290129 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290151 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290172 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290197 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290221 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290242 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290264 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290298 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290320 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290341 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290386 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290410 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290432 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290457 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290484 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290511 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290533 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290554 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290576 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290603 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290626 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290647 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290669 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290698 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290746 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290769 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290792 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290817 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290839 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290864 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290887 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290913 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290942 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291002 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291031 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291052 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291076 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291098 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291122 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 15:56:23 crc 
kubenswrapper[4740]: I0130 15:56:23.291143 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291164 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291184 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291204 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291223 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291244 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291265 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291287 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291307 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291326 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291344 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291383 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291405 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291423 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291444 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291465 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291491 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291516 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291539 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291607 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291638 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291661 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291690 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293228 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293365 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293410 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293449 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293479 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293515 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293560 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293587 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293640 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293676 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293711 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293743 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293779 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293813 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293841 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293879 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293916 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") 
pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293947 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294033 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294075 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294183 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294231 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294276 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294663 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294699 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294733 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294840 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294948 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.295076 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.295108 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.295135 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.295181 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.295334 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296116 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296157 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296196 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296230 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296257 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296332 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296445 4740 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296476 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296522 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296615 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296744 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296906 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.297002 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.297369 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.297712 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.297973 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290431 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290516 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.290896 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291080 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.298471 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.298795 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.299284 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.298010 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.301163 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291381 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291418 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291464 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291539 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291580 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291657 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.292871 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293127 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293174 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.293931 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294260 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294685 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.291722 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294833 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.294892 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.295658 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.295674 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.295759 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296334 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296395 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296509 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296545 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.296908 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.297393 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.297505 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.297847 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.302095 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.302647 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.309750 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.313712 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.317037 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.317236 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.317254 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.317250 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.317815 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.317975 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.318174 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.318380 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.318610 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.318817 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.318828 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.318963 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.319262 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.319426 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.319457 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.319466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.319482 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.319669 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.319866 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.320114 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.320213 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.320509 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.320639 4740 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.320782 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.320827 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.320972 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.321120 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.321145 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.321452 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.321715 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.321748 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.321847 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.321917 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322156 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322376 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322400 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.301201 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322571 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322590 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322622 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322653 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322654 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322679 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322817 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322842 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322864 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322877 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322881 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322916 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322942 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322961 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.322981 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323000 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323022 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323042 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323060 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323077 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323095 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323113 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323131 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323150 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323169 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323188 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323207 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323225 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323242 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323260 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323289 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323307 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323326 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323155 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323422 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323170 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323534 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323680 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323800 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.323930 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324052 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324140 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324193 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324254 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324294 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324326 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324371 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324405 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324414 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324436 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324465 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324494 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324520 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324544 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324570 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324597 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324622 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324648 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324673 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324698 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324731 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324757 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324799 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324848 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324880 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324907 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324931 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324957 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.324983 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325008 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325032 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325055 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325083 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325111 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325140 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325166 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325195 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325220 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 
15:56:23.325246 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325304 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325331 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325374 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325402 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325428 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325454 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325479 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325503 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325529 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 15:56:23 crc kubenswrapper[4740]: 
I0130 15:56:23.325584 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325616 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325648 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325676 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325715 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325743 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325800 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325826 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325852 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325914 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325945 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.325971 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326055 4740 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326073 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326104 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326127 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326175 4740 
reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326189 4740 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326202 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326217 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326234 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326248 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326266 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326280 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.326610 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.327540 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.327588 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.328016 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.329149 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.329253 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.329674 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.329944 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.330229 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.330514 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.330761 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.330884 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.331137 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.331251 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.331253 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.331445 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.331821 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.331826 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.331912 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.332184 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.332233 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.332338 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.332157 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.332558 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.332636 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.332647 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.332669 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.332851 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.332927 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.332970 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.333238 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.333611 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.333796 4740 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.367049 4740 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.333834 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.334041 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.334077 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.334091 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.334118 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.334141 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.334158 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.334216 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.334268 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.334293 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.341812 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.342030 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.342755 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.348710 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.348890 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.348954 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.349101 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.349236 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.351796 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.351944 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.352033 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.356203 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.359457 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.359965 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.360457 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.360673 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.360928 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.360960 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.361186 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.361330 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.361706 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.362145 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.363312 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.363513 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.365111 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.365540 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.365577 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.366012 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.366302 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.366885 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.360317 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.368788 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.369230 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.363017 4740 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.369444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.369477 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.369487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.369505 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.369515 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:23Z","lastTransitionTime":"2026-01-30T15:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.369863 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.370022 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.362268 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.371218 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.363746 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.371972 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.371982 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.372190 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.372588 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.372692 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.372782 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.373126 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:56:23.873077065 +0000 UTC m=+32.510139664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.373161 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:23.873152197 +0000 UTC m=+32.510215006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.373179 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:23.873172138 +0000 UTC m=+32.510234737 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.373281 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.373487 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.373881 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
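[Annotation] The nestedpendingoperations entries above show two distinct things. First, the CSI teardown fails because the kubevirt.io.hostpath-provisioner driver has not yet re-registered with the kubelet after restart, so the unmounter cannot get a CSI client. Second, "No retries permitted until ... (durationBeforeRetry 500ms)" is the kubelet's per-operation exponential backoff: the first failure schedules a retry 500ms out, and the delay roughly doubles on later failures up to a cap. A minimal Go sketch of that doubling (constants are assumptions modeled on upstream kubelet behavior, where the initial delay is 500ms with a ceiling of about two minutes; they are not read from this cluster):

    // backoff.go - sketch of the doubling retry delay visible in the
    // "durationBeforeRetry 500ms" messages above.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialDelay = 500 * time.Millisecond // first retry, as in the log
        maxDelay     = 2*time.Minute + 2*time.Second // assumed upstream ceiling
    )

    func nextDelay(cur time.Duration) time.Duration {
        if cur == 0 {
            return initialDelay
        }
        if cur *= 2; cur > maxDelay {
            return maxDelay
        }
        return cur
    }

    func main() {
        var d time.Duration
        for i := 0; i < 10; i++ {
            d = nextDelay(d)
            fmt.Printf("failure %d -> retry in %v\n", i+1, d)
        }
    }

Running it prints 500ms, 1s, 2s, ..., flattening at the cap, which is why repeated failures in this log space themselves out rather than spinning.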
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.374171 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.376384 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.376416 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.377162 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.377610 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.377937 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.378414 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.378942 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.379140 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.379396 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.379538 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.309138 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.379726 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.379981 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.381345 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.381618 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.381737 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.381760 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.381774 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.381869 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:23.881845554 +0000 UTC m=+32.518908153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.382091 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.382397 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.382923 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.383021 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.384438 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.384722 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.384813 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.384835 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.384850 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.384913 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:23.88488605 +0000 UTC m=+32.521948649 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.385125 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.385867 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.388220 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.388629 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.389195 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.389443 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
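[Annotation] Every status patch in this window dies the same way: the API server must call the pod.network-node-identity.openshift.io / node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and the connection is refused. Per the surrounding entries, the webhook's own pod (network-node-identity-vrzqb) is still in ContainerCreating, so this is the usual bootstrap chicken-and-egg; the status_manager simply retries on later sync loops. A trivial probe one might run on the node while debugging (hypothetical helper, not from the log):

    // probe.go - dial the webhook endpoint the status patches fail against;
    // a refused connection reproduces the error seen in this log.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
        if err != nil {
            fmt.Println("webhook unreachable:", err) // matches "connection refused"
            return
        }
        conn.Close()
        fmt.Println("webhook port is accepting connections")
    }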
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.404099 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.404156 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.404168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.404192 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.404207 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:23Z","lastTransitionTime":"2026-01-30T15:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.407000 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.407127 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.408626 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.409200 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.410119 4740 csr.go:261] certificate signing request csr-x6dpl is approved, waiting to be issued Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.410806 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.411411 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.411528 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.412248 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.412956 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.414331 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.414545 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.420695 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.421597 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.421995 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.422838 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.424780 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.428602 4740 
csr.go:257] certificate signing request csr-x6dpl is issued Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.429453 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.429760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.429787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.429796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.429812 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.429824 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:23Z","lastTransitionTime":"2026-01-30T15:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.446605 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449460 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449546 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449799 4740 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449816 4740 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449840 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449853 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449867 4740 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449889 4740 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449901 4740 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449914 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449924 4740 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449933 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449944 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449956 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449966 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449975 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.449985 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450010 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450020 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450031 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450045 4740 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450056 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450066 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450076 4740 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450089 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450098 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450107 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450116 4740 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450129 4740 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450138 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450147 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450158 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450169 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450178 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450187 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450200 4740 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450234 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450245 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450254 4740 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450266 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450275 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450284 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450315 4740 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450327 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450336 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450376 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450389 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450401 4740 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450414 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450423 4740 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450436 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450446 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450456 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450466 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450480 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450499 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450508 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450525 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450536 4740 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450556 4740 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450565 4740 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450576 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450595 4740 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450603 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450613 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450624 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450633 4740 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450642 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450651 4740 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450663 4740 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450671 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450680 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450702 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450713 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450722 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450740 4740 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450753 4740 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450762 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450771 4740 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450786 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450798 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450807 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450816 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450824 4740 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450837 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450846 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450873 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450882 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450896 4740 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450905 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450913 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450924 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450932 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450940 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450949 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450959 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450969 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450977 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450985 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.450996 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451008 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc 
kubenswrapper[4740]: I0130 15:56:23.451016 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451032 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451044 4740 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451053 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451064 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451075 4740 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451083 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451098 4740 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451106 4740 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451118 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451128 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451136 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451148 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451162 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451171 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451179 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451189 4740 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451198 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451206 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451215 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451229 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451236 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451256 4740 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451274 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451287 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451295 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451304 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451315 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451324 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451332 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451341 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451365 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451373 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451382 4740 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451390 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451402 4740 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451420 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451429 4740 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451437 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451455 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451471 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451480 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451498 4740 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451524 4740 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451531 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451542 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451552 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451561 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451568 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451576 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451587 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451595 4740 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451604 4740 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451611 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451622 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451639 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451650 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451661 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451671 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451679 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451700 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451711 4740 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451719 4740 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451729 4740 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451737 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451748 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451756 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451790 4740 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451802 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451812 4740 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451821 4740 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451830 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451845 4740 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451858 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451868 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451881 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451896 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451926 4740 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451949 4740 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.451988 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.452046 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.452525 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.452540 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.453246 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.453834 4740 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.454025 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.455953 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.456281 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
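
Each of the status-patch errors above embeds the attempted JSON patch as a Go-quoted string inside err="...", so every inner quote appears in the saved log text as the four-character sequence \\\" . A minimal sketch for recovering that patch and printing the Ready condition, assuming a local kubelet.log copy and that the payload contains no deeper escape levels (the file name and regex are illustrative, not kubelet APIs):

```python
import json
import re

# The patch sits between 'failed to patch status \"' and '\" for '.
PATCH_RE = re.compile(r'failed to patch status \\"(\{.*?\})\\" for ')

with open("kubelet.log") as fh:          # assumed local copy of this log
    for line in fh:
        m = PATCH_RE.search(line)
        if not m:
            continue
        # Undo one level of Go quoting: the literal text \\\" becomes ".
        payload = json.loads(m.group(1).replace(r'\\\"', '"'))
        for cond in payload.get("status", {}).get("conditions", []):
            if cond.get("type") == "Ready":
                # For this log: KubeletNotReady / "no CNI configuration file ..."
                print(cond.get("reason"), "-", cond.get("message"))
```

Run over the entries above, this surfaces the same story the raw payloads bury in escaping: the node is NotReady only because the network plugin has not produced a CNI config yet.
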
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.461335 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.461707 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.461722 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.461788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.461801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.461843 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.461856 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:23Z","lastTransitionTime":"2026-01-30T15:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
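
All of the patch failures in this window share one root cause: nothing is listening on the network-node-identity webhook endpoint at 127.0.0.1:9743, so both node and pod status updates bounce with "connection refused". A small sketch that collapses the repeated failures into distinct (webhook, URL, cause) tuples, again assuming a local kubelet.log copy (the pattern is illustrative):

```python
import re
from collections import Counter

# Captures: webhook name, the Post URL, and the trailing cause, as they
# appear (backslash-escaped) in entries like
#   failed calling webhook \"node.network-node-identity.openshift.io\": ...
HOOK_RE = re.compile(
    r'failed calling webhook \\"([^\\]+)\\".*?'
    r'Post \\"(https://[^\\]+)\\": ([^"]+)'
)

causes = Counter()
with open("kubelet.log") as fh:          # assumed local copy of this log
    for line in fh:
        for hook, url, cause in HOOK_RE.findall(line):
            causes[(hook, url, cause)] += 1

for (hook, url, cause), n in causes.most_common():
    print(f"{n}x {hook} -> {url}: {cause}")
```

For this section it would reduce hundreds of kilobytes of retries to two lines: node.network-node-identity.openshift.io and pod.network-node-identity.openshift.io, both failing with dial tcp 127.0.0.1:9743: connect: connection refused.
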
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.462430 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.463509 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.465143 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.465890 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.467482 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.468282 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.468456 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.469374 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.469835 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.470953 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.472115 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.472427 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.474502 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.475428 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.475549 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.476142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.476181 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.476190 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.476206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.476220 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:23Z","lastTransitionTime":"2026-01-30T15:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.476424 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.477663 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.478263 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.479780 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.480269 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.481409 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.481918 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.482435 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.483645 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.484108 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.485831 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.490374 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
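
The repeated NotReady condition names an empty CNI configuration directory, and the "No sandbox for pod can be found" entries follow from it: new pod sandboxes cannot come up until the network plugin drops a config file there. A trivial on-node check of the directory quoted in the log (sketch only; it assumes you can read the node's filesystem):

```python
import os

# Path taken verbatim from the KubeletNotReady message above.
CNI_DIR = "/etc/kubernetes/cni/net.d"

try:
    entries = sorted(os.listdir(CNI_DIR))
except FileNotFoundError:
    entries = []

if entries:
    print(f"{CNI_DIR}: found {entries}")
else:
    print(f"{CNI_DIR}: empty or missing -- matches the "
          f"'no CNI configuration file' condition in the log")
```
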
Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.504377 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.504537 4740 
kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.511653 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.511698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.511710 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.511730 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.511742 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:23Z","lastTransitionTime":"2026-01-30T15:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.522406 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
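
The node-status patch above is rejected because nothing is listening on 127.0.0.1:9743, the endpoint backing the node.network-node-identity.openshift.io webhook, so the kubelet exhausts its retry budget ("update node status exceeds retry count"). A minimal Go sketch (a hypothetical standalone probe, not kubelet code) that reproduces the same "connect: connection refused" condition:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe attempts a plain TCP connect to the webhook's address, the same
// first step the API server's webhook client takes before TLS.
func probe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probe("127.0.0.1:9743"); err != nil {
		// Prints e.g.: dial tcp 127.0.0.1:9743: connect: connection refused
		fmt.Println("webhook endpoint down:", err)
	}
}

This matches the circular dependency visible in the log: the webhook pod (network-node-identity-vrzqb) cannot run until networking is up, while status patches that would report on it must pass through that same webhook.
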
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.552513 4740 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.552544 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.552556 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.552798 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.583012 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.597618 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f300ab0b672adc11b06bb00fb5c6236865eea3f5d0bfe50b6cfeb1281725dc7b"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.605500 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"51b15d09457cd56e9bf7ff6b3faf2fe5e051fe5fd8856366ddaecfd4d83b1fce"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.617813 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9bdd133cd43fa113a462d2e82cb60eb403c19de88394e0dd4bc8189521950be1"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.619504 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.619547 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.619557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.619575 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.619589 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:23Z","lastTransitionTime":"2026-01-30T15:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
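
The recurring NotReady condition ("no CNI configuration file in /etc/kubernetes/cni/net.d/") reflects the kubelet's network-readiness check finding an empty CNI config directory. As an illustration only (not the kubelet's actual implementation), a Go sketch of the effective check, scanning for the standard extensions libcni loads:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains any CNI network configuration.
// The extensions mirror what libcni accepts (.conf, .conflist, .json).
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	// Stays false (and the node stays NotReady) until the network
	// operator writes a CNI config into the directory.
	fmt.Println(ok, err)
}
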
Has your network provider started?"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.621543 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.625304 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.634236 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.643704 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4" exitCode=255 Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.644496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.644544 4740 scope.go:117] "RemoveContainer" containerID="487ae44143c58a632b68ef4565bffaed8e45f7b95d8ed357ef669d2b63eaaf7e" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.664098 4740 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.665137 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.675896 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.690807 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
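
The earlier record "Failed creating a mirror pod ... pods \"kube-controller-manager-crc\" already exists" is an AlreadyExists answer to the kubelet's mirror-pod create for a static pod. A sketch of how such an error is conventionally classified with the apimachinery helpers (whether this call site treats it as benign is an assumption, not confirmed by the log):

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Simulate the API server's response to a duplicate mirror-pod create.
	gr := schema.GroupResource{Resource: "pods"}
	err := apierrors.NewAlreadyExists(gr, "kube-controller-manager-crc")
	if apierrors.IsAlreadyExists(err) {
		// Typically logged and skipped: the mirror pod is already present.
		fmt.Println("mirror pod already exists:", err)
	}
}
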
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.701899 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.702269 4740 scope.go:117] "RemoveContainer" containerID="612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.702484 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.714318 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.725340 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.725407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.725418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.725451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.725463 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:23Z","lastTransitionTime":"2026-01-30T15:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
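
The CrashLoopBackOff record above ("back-off 10s restarting failed container=kube-apiserver-check-endpoints") shows the first step of the kubelet's restart back-off. Illustrative only: the delay is commonly described as starting at 10s and roughly doubling per failed restart up to a 5m cap; the constants below are assumptions standing in for kubelet's internal back-off policy.

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 10 * time.Second // first back-off, as seen in the log
		max     = 5 * time.Minute  // assumed cap
	)
	d := initial
	for i := 1; i <= 7; i++ {
		fmt.Printf("restart %d: back-off %s\n", i, d)
		if d *= 2; d > max {
			d = max
		}
	}
}

Under these assumptions the sequence runs 10s, 20s, 40s, ..., 5m0s, which is why a container stuck in this state settles into one restart attempt every five minutes.
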
Has your network provider started?"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.738757 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.746941 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.762324 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.828413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.828473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.828485 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.828506 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.828518 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:23Z","lastTransitionTime":"2026-01-30T15:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.859752 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xtbq6"] Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.860203 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xtbq6" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.862548 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.862637 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.862771 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.879512 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487ae44143c58a632b68ef4565bffaed8e45f7b95d8ed357ef669d2b63eaaf7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:05Z\\\",\\\"message\\\":\\\"W0130 15:56:04.193572 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 15:56:04.194319 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769788564 cert, and key in /tmp/serving-cert-1801888733/serving-signer.crt, /tmp/serving-cert-1801888733/serving-signer.key\\\\nI0130 15:56:04.751266 1 observer_polling.go:159] Starting file observer\\\\nW0130 15:56:04.758673 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 15:56:04.758930 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:04.759929 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801888733/tls.crt::/tmp/serving-cert-1801888733/tls.key\\\\\\\"\\\\nF0130 15:56:05.131415 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.899721 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.932736 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.933105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.933132 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.933142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.933158 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.933183 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:23Z","lastTransitionTime":"2026-01-30T15:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.956088 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.956244 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.956377 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.956422 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 15:56:24.95638299 +0000 UTC m=+33.593445619 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.956520 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:24.956483602 +0000 UTC m=+33.593546191 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.956946 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.957128 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvwkv\" (UniqueName: \"kubernetes.io/projected/f16748fa-365c-4996-856a-4cd9a1166795-kube-api-access-xvwkv\") pod \"node-resolver-xtbq6\" (UID: \"f16748fa-365c-4996-856a-4cd9a1166795\") " pod="openshift-dns/node-resolver-xtbq6" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.957093 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.957195 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:24.95718826 +0000 UTC m=+33.594250859 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.957223 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.957302 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.957387 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.957402 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.957467 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.957489 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f16748fa-365c-4996-856a-4cd9a1166795-hosts-file\") pod \"node-resolver-xtbq6\" (UID: \"f16748fa-365c-4996-856a-4cd9a1166795\") " pod="openshift-dns/node-resolver-xtbq6" Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.957568 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.957568 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:24.957552189 +0000 UTC m=+33.594614788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.957581 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.957608 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:23 crc kubenswrapper[4740]: E0130 15:56:23.957638 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:24.957628751 +0000 UTC m=+33.594691530 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.966850 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.980792 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:23 crc kubenswrapper[4740]: I0130 15:56:23.992925 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.004827 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc 
kubenswrapper[4740]: I0130 15:56:24.017940 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.029184 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.036046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.036110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.036120 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.036157 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.036171 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:24Z","lastTransitionTime":"2026-01-30T15:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.058514 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwkv\" (UniqueName: \"kubernetes.io/projected/f16748fa-365c-4996-856a-4cd9a1166795-kube-api-access-xvwkv\") pod \"node-resolver-xtbq6\" (UID: \"f16748fa-365c-4996-856a-4cd9a1166795\") " pod="openshift-dns/node-resolver-xtbq6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.058591 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f16748fa-365c-4996-856a-4cd9a1166795-hosts-file\") pod \"node-resolver-xtbq6\" (UID: \"f16748fa-365c-4996-856a-4cd9a1166795\") " pod="openshift-dns/node-resolver-xtbq6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.058707 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f16748fa-365c-4996-856a-4cd9a1166795-hosts-file\") pod \"node-resolver-xtbq6\" (UID: \"f16748fa-365c-4996-856a-4cd9a1166795\") " pod="openshift-dns/node-resolver-xtbq6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.085405 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwkv\" (UniqueName: \"kubernetes.io/projected/f16748fa-365c-4996-856a-4cd9a1166795-kube-api-access-xvwkv\") pod \"node-resolver-xtbq6\" (UID: \"f16748fa-365c-4996-856a-4cd9a1166795\") " pod="openshift-dns/node-resolver-xtbq6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.133963 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-7c7j6"] Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.134499 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.135711 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-pkzlw"] Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.136011 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-g5497"] Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.136107 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.136746 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.137495 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.138192 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.138474 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.138517 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.138535 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.138552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.138585 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.138565 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:24Z","lastTransitionTime":"2026-01-30T15:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.138937 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.139503 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.140104 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.140493 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.140575 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.141571 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.141921 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.142147 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.145500 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.149990 4740 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.175624 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xtbq6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.179062 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:40:12.985031839 +0000 UTC Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.217623 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.242456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.242509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.242522 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.242543 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.242555 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:24Z","lastTransitionTime":"2026-01-30T15:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.263829 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-mcd-auth-proxy-config\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.263906 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-multus-conf-dir\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.263932 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e65088cb-e700-4af1-b788-af399f918bd0-multus-daemon-config\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.263987 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-multus-socket-dir-parent\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264037 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-system-cni-dir\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264061 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbh6w\" (UniqueName: \"kubernetes.io/projected/5ece215f-ed67-4d10-8e39-85d49a052d52-kube-api-access-gbh6w\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264083 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-run-multus-certs\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264125 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-cnibin\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264153 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-os-release\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264288 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-run-netns\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264363 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-system-cni-dir\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264441 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-rootfs\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264464 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5ece215f-ed67-4d10-8e39-85d49a052d52-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264487 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54m64\" (UniqueName: \"kubernetes.io/projected/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-kube-api-access-54m64\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264512 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-var-lib-cni-multus\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264534 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-os-release\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264563 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-run-k8s-cni-cncf-io\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc 
kubenswrapper[4740]: I0130 15:56:24.264588 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-multus-cni-dir\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264608 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-var-lib-kubelet\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264639 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-etc-kubernetes\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264660 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5ece215f-ed67-4d10-8e39-85d49a052d52-cni-binary-copy\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264699 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-cnibin\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264726 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxcq5\" (UniqueName: \"kubernetes.io/projected/e65088cb-e700-4af1-b788-af399f918bd0-kube-api-access-wxcq5\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264760 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-tuning-conf-dir\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264781 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-proxy-tls\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264801 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e65088cb-e700-4af1-b788-af399f918bd0-cni-binary-copy\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " 
pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264840 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-var-lib-cni-bin\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.264861 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-hostroot\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.287845 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.325665 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc 
kubenswrapper[4740]: I0130 15:56:24.335931 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.336125 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.345444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.345485 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.345496 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.345515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.345530 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:24Z","lastTransitionTime":"2026-01-30T15:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.355552 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-os-release\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366039 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-run-netns\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366063 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-system-cni-dir\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366108 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-rootfs\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 
15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366133 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5ece215f-ed67-4d10-8e39-85d49a052d52-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54m64\" (UniqueName: \"kubernetes.io/projected/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-kube-api-access-54m64\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366183 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-var-lib-cni-multus\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366203 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-os-release\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366223 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-multus-cni-dir\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366223 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-rootfs\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366245 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-run-k8s-cni-cncf-io\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366244 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-run-netns\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366294 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-etc-kubernetes\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366268 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-etc-kubernetes\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366324 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-var-lib-cni-multus\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366375 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-os-release\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366433 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-multus-cni-dir\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366468 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-os-release\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366489 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5ece215f-ed67-4d10-8e39-85d49a052d52-cni-binary-copy\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366501 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-run-k8s-cni-cncf-io\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366511 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-system-cni-dir\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366659 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-var-lib-kubelet\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366690 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-cnibin\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366726 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-tuning-conf-dir\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366752 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-proxy-tls\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e65088cb-e700-4af1-b788-af399f918bd0-cni-binary-copy\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366788 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-var-lib-cni-bin\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366795 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-cnibin\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366804 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-hostroot\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366816 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-var-lib-kubelet\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366863 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-var-lib-cni-bin\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366896 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-hostroot\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 
15:56:24.366825 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxcq5\" (UniqueName: \"kubernetes.io/projected/e65088cb-e700-4af1-b788-af399f918bd0-kube-api-access-wxcq5\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.366932 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-mcd-auth-proxy-config\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367120 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5ece215f-ed67-4d10-8e39-85d49a052d52-cni-binary-copy\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367561 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-multus-conf-dir\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367605 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e65088cb-e700-4af1-b788-af399f918bd0-multus-daemon-config\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367639 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-mcd-auth-proxy-config\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367655 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-multus-socket-dir-parent\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367687 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-system-cni-dir\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbh6w\" (UniqueName: \"kubernetes.io/projected/5ece215f-ed67-4d10-8e39-85d49a052d52-kube-api-access-gbh6w\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367725 
4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-multus-socket-dir-parent\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367737 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-run-multus-certs\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367769 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-cnibin\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367691 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-multus-conf-dir\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367823 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-cnibin\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367891 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5ece215f-ed67-4d10-8e39-85d49a052d52-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367897 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-system-cni-dir\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.367930 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e65088cb-e700-4af1-b788-af399f918bd0-host-run-multus-certs\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.368498 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e65088cb-e700-4af1-b788-af399f918bd0-multus-daemon-config\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.368803 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/e65088cb-e700-4af1-b788-af399f918bd0-cni-binary-copy\") pod \"multus-pkzlw\" (UID: \"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.370094 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-proxy-tls\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.371734 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487ae44143c58a632b68ef4565bffaed8e45f7b95d8ed357ef669d2b63eaaf7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:05Z\\\",\\\"message\\\":\\\"W0130 15:56:04.193572 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 15:56:04.194319 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769788564 cert, and key in /tmp/serving-cert-1801888733/serving-signer.crt, /tmp/serving-cert-1801888733/serving-signer.key\\\\nI0130 15:56:04.751266 1 observer_polling.go:159] Starting file observer\\\\nW0130 15:56:04.758673 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 15:56:04.758930 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:04.759929 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801888733/tls.crt::/tmp/serving-cert-1801888733/tls.key\\\\\\\"\\\\nF0130 15:56:05.131415 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.380526 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5ece215f-ed67-4d10-8e39-85d49a052d52-tuning-conf-dir\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.385241 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbh6w\" (UniqueName: \"kubernetes.io/projected/5ece215f-ed67-4d10-8e39-85d49a052d52-kube-api-access-gbh6w\") pod \"multus-additional-cni-plugins-g5497\" (UID: \"5ece215f-ed67-4d10-8e39-85d49a052d52\") " pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.389557 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxcq5\" (UniqueName: \"kubernetes.io/projected/e65088cb-e700-4af1-b788-af399f918bd0-kube-api-access-wxcq5\") pod \"multus-pkzlw\" (UID: 
\"e65088cb-e700-4af1-b788-af399f918bd0\") " pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.391635 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.391751 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54m64\" (UniqueName: \"kubernetes.io/projected/139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9-kube-api-access-54m64\") pod \"machine-config-daemon-7c7j6\" (UID: \"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\") " pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.400956 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.413872 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.424288 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.430453 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.430416 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 15:51:23 +0000 UTC, rotation deadline is 2026-11-15 12:21:29.064971234 +0000 UTC Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.430508 4740 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6932h25m4.634465874s for next certificate rotation Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.437646 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.448344 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.448402 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.448412 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.448431 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.448442 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:24Z","lastTransitionTime":"2026-01-30T15:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.448564 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.453675 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.457290 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.462972 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-pkzlw" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.465814 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.476845 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-g5497" Jan 30 15:56:24 crc kubenswrapper[4740]: W0130 15:56:24.481280 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod139658c1_36a2_4af9_bdfd_2bc3f9e6dcc9.slice/crio-2504c7a5474c8dbd6ec52663fd9f66eaad6895c3c7bcde22e0bfd0a6c2aa70f4 WatchSource:0}: Error finding container 2504c7a5474c8dbd6ec52663fd9f66eaad6895c3c7bcde22e0bfd0a6c2aa70f4: Status 404 returned error can't find the container with id 2504c7a5474c8dbd6ec52663fd9f66eaad6895c3c7bcde22e0bfd0a6c2aa70f4 Jan 30 15:56:24 crc kubenswrapper[4740]: W0130 15:56:24.482430 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode65088cb_e700_4af1_b788_af399f918bd0.slice/crio-37c3afe77dff601c3d6de9838f601d6bcdf5d19e019829a2f3329b374cfca4d1 WatchSource:0}: Error finding container 37c3afe77dff601c3d6de9838f601d6bcdf5d19e019829a2f3329b374cfca4d1: Status 404 returned error can't find the container with id 37c3afe77dff601c3d6de9838f601d6bcdf5d19e019829a2f3329b374cfca4d1 Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.485029 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: W0130 15:56:24.495404 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ece215f_ed67_4d10_8e39_85d49a052d52.slice/crio-c506e87fb4b99a0e6f5226f840d92e5d63e8560ba88be825212296402e16a4b2 WatchSource:0}: Error finding container c506e87fb4b99a0e6f5226f840d92e5d63e8560ba88be825212296402e16a4b2: Status 404 returned error can't find the container with id c506e87fb4b99a0e6f5226f840d92e5d63e8560ba88be825212296402e16a4b2 Jan 30 
15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.503052 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487ae44143c58a632b68ef4565bffaed8e45f7b95d8ed357ef669d2b63eaaf7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:05Z\\\",\\\"message\\\":\\\"W0130 15:56:04.193572 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 15:56:04.194319 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769788564 cert, and key in /tmp/serving-cert-1801888733/serving-signer.crt, /tmp/serving-cert-1801888733/serving-signer.key\\\\nI0130 15:56:04.751266 1 observer_polling.go:159] Starting file observer\\\\nW0130 15:56:04.758673 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 15:56:04.758930 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:04.759929 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801888733/tls.crt::/tmp/serving-cert-1801888733/tls.key\\\\\\\"\\\\nF0130 15:56:05.131415 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized 
mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.516084 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.526912 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.538432 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.552191 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.552241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.552254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.552282 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.552299 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:24Z","lastTransitionTime":"2026-01-30T15:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.558911 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.570861 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.578013 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jhsjm"] Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.578981 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.582056 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.582173 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.582395 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.582479 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.582535 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.582575 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.585178 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.588375 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.603111 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.622184 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.637537 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.647793 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc 
kubenswrapper[4740]: I0130 15:56:24.648708 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pkzlw" event={"ID":"e65088cb-e700-4af1-b788-af399f918bd0","Type":"ContainerStarted","Data":"37c3afe77dff601c3d6de9838f601d6bcdf5d19e019829a2f3329b374cfca4d1"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.652032 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.655330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.655396 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.655412 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.655436 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.655449 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:24Z","lastTransitionTime":"2026-01-30T15:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.657786 4740 scope.go:117] "RemoveContainer" containerID="612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4" Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.657958 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.661703 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487ae44143c58a632b68ef4565bffaed8e45f7b95d8ed357ef669d2b63eaaf7e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:05Z\\\",\\\"message\\\":\\\"W0130 15:56:04.193572 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
15:56:04.194319 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769788564 cert, and key in /tmp/serving-cert-1801888733/serving-signer.crt, /tmp/serving-cert-1801888733/serving-signer.key\\\\nI0130 15:56:04.751266 1 observer_polling.go:159] Starting file observer\\\\nW0130 15:56:04.758673 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 15:56:04.758930 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:04.759929 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801888733/tls.crt::/tmp/serving-cert-1801888733/tls.key\\\\\\\"\\\\nF0130 15:56:05.131415 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.672861 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.676708 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" event={"ID":"5ece215f-ed67-4d10-8e39-85d49a052d52","Type":"ContainerStarted","Data":"c506e87fb4b99a0e6f5226f840d92e5d63e8560ba88be825212296402e16a4b2"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.680147 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.682863 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"2504c7a5474c8dbd6ec52663fd9f66eaad6895c3c7bcde22e0bfd0a6c2aa70f4"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.689899 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xtbq6" event={"ID":"f16748fa-365c-4996-856a-4cd9a1166795","Type":"ContainerStarted","Data":"a7880a696fc27952e9405ef38a0a9b706addeb490f357100482ad3055ea9c1c4"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.692984 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.697297 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.707977 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.720215 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.736545 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.745197 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.757810 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.759549 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.759582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.759590 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.759606 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.759617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:24Z","lastTransitionTime":"2026-01-30T15:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.769144 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772392 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-openvswitch\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772469 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-env-overrides\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772488 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-netns\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772538 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-script-lib\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772644 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-netd\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772690 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-kubelet\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772751 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-slash\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772823 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-ovn\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772842 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-node-log\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772889 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-bin\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772910 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-config\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772925 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c06ab51-b857-47c7-a13a-e64edae96756-ovn-node-metrics-cert\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.772988 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-systemd\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.773006 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.773065 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-var-lib-openvswitch\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.773083 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-log-socket\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.773151 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-ovn-kubernetes\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.773220 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-systemd-units\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.773240 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hnwb\" (UniqueName: \"kubernetes.io/projected/2c06ab51-b857-47c7-a13a-e64edae96756-kube-api-access-6hnwb\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.773297 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-etc-openvswitch\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.780094 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.788710 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc 
kubenswrapper[4740]: I0130 15:56:24.825209 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.862887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.862933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.862943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.862962 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.862972 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:24Z","lastTransitionTime":"2026-01-30T15:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.868742 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874160 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-var-lib-openvswitch\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874204 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-log-socket\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874223 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-ovn-kubernetes\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874264 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-systemd-units\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874281 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hnwb\" (UniqueName: \"kubernetes.io/projected/2c06ab51-b857-47c7-a13a-e64edae96756-kube-api-access-6hnwb\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-etc-openvswitch\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874317 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-openvswitch\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874337 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-netns\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-env-overrides\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874382 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-script-lib\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874399 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-netd\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874438 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-node-log\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874462 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-bin\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874493 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-kubelet\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874532 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-slash\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874574 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-ovn\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874592 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-config\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874612 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c06ab51-b857-47c7-a13a-e64edae96756-ovn-node-metrics-cert\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874634 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874657 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-systemd\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874744 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-systemd\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874798 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-var-lib-openvswitch\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-log-socket\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-ovn-kubernetes\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.874890 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-systemd-units\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.875072 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-netns\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.875089 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-bin\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.875146 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-kubelet\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.875143 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-etc-openvswitch\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.875165 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-netd\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.875175 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-openvswitch\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.875201 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.875274 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-slash\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.875280 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-ovn\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.875814 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-env-overrides\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.875990 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-config\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.876611 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-script-lib\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.876746 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-node-log\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.879962 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c06ab51-b857-47c7-a13a-e64edae96756-ovn-node-metrics-cert\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.918029 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hnwb\" (UniqueName: \"kubernetes.io/projected/2c06ab51-b857-47c7-a13a-e64edae96756-kube-api-access-6hnwb\") pod \"ovnkube-node-jhsjm\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.920884 4740 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.930240 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.965155 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.965386 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.965649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.965942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.966233 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:24Z","lastTransitionTime":"2026-01-30T15:56:24Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.966406 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Jan 30 15:56:24 crc kubenswrapper[4740]: W0130 15:56:24.972404 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c06ab51_b857_47c7_a13a_e64edae96756.slice/crio-ec052aa91ddf29205cfa35c0846942ec93588c0c6cc2314d26df2c9ef6ca3057 WatchSource:0}: Error finding container ec052aa91ddf29205cfa35c0846942ec93588c0c6cc2314d26df2c9ef6ca3057: Status 404 returned error can't find the container with id ec052aa91ddf29205cfa35c0846942ec93588c0c6cc2314d26df2c9ef6ca3057 Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.975854 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.976624 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:56:26.976581004 +0000 UTC m=+35.613643623 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.976875 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.977043 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.977091 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.977290 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:26.977278612 +0000 UTC m=+35.614341221 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.977216 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.977210 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:56:24 crc kubenswrapper[4740]: I0130 15:56:24.977370 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.977467 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:26.977433136 +0000 UTC m=+35.614495735 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.977509 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.977524 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.977539 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.977572 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:26.977563809 +0000 UTC m=+35.614626418 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.978649 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.978677 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.978704 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:24 crc kubenswrapper[4740]: E0130 15:56:24.978740 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:26.978732228 +0000 UTC m=+35.615794827 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.018092 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.052548 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.068742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.068807 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.068821 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.068843 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.068854 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:25Z","lastTransitionTime":"2026-01-30T15:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.089885 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.124297 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.167573 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.179158 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:10:40.73937362 +0000 UTC Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.179226 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.179257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.179266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.179285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.179298 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:25Z","lastTransitionTime":"2026-01-30T15:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.205754 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.243678 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.281760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.281810 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.281851 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.281870 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.281883 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:25Z","lastTransitionTime":"2026-01-30T15:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.335436 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:25 crc kubenswrapper[4740]: E0130 15:56:25.335594 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.335612 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:25 crc kubenswrapper[4740]: E0130 15:56:25.335807 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.339679 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.340453 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.341592 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.342312 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.343467 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.344030 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.344702 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.345776 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.346477 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.347439 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.347977 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.349088 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.349676 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.350370 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.351419 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.351872 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.352755 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.353383 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.384697 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.384968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.385074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.385177 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.385263 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:25Z","lastTransitionTime":"2026-01-30T15:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.487100 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.487155 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.487167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.487190 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.487205 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:25Z","lastTransitionTime":"2026-01-30T15:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.590975 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.591027 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.591041 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.591060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.591072 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:25Z","lastTransitionTime":"2026-01-30T15:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.693198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.693239 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.693253 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.693272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.693284 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:25Z","lastTransitionTime":"2026-01-30T15:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.696517 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" event={"ID":"5ece215f-ed67-4d10-8e39-85d49a052d52","Type":"ContainerStarted","Data":"b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58"}
Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.698451 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762"}
Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.699951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pkzlw" event={"ID":"e65088cb-e700-4af1-b788-af399f918bd0","Type":"ContainerStarted","Data":"db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4"}
Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.701274 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24"}
Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.702608 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xtbq6" event={"ID":"f16748fa-365c-4996-856a-4cd9a1166795","Type":"ContainerStarted","Data":"b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280"}
Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.703773 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"ec052aa91ddf29205cfa35c0846942ec93588c0c6cc2314d26df2c9ef6ca3057"}
Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.704471 4740 scope.go:117] "RemoveContainer" containerID="612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4"
Jan 30 15:56:25 crc kubenswrapper[4740]: E0130 15:56:25.704648 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.712015 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.726595 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.736108 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.745100 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.753307 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.760832 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.769068 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.776997 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.785467 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\
\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.794697 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.796159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.796192 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.796203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.796221 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.796233 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:25Z","lastTransitionTime":"2026-01-30T15:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.803999 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.812166 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.818729 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.828783 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.843476 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.883489 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.899152 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.899209 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.899224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.899247 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.899261 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:25Z","lastTransitionTime":"2026-01-30T15:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.923292 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:25 crc kubenswrapper[4740]: I0130 15:56:25.964769 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\
\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.001335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.001510 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.001576 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.001655 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.001734 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:26Z","lastTransitionTime":"2026-01-30T15:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.022535 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.052803 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.094598 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.104718 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.104757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.104766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.104782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.104793 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:26Z","lastTransitionTime":"2026-01-30T15:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.142138 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.180037 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:20:22.663909473 +0000 UTC Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.184007 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.204240 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.206817 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.206940 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.207061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.207148 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.207222 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:26Z","lastTransitionTime":"2026-01-30T15:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.243120 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.281784 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.309576 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.309814 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.309914 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.310025 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.310112 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:26Z","lastTransitionTime":"2026-01-30T15:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.334917 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.335046 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.413151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.413516 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.413609 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.413702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.413797 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:26Z","lastTransitionTime":"2026-01-30T15:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.516431 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.516742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.516830 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.516915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.517004 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:26Z","lastTransitionTime":"2026-01-30T15:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.620186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.620544 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.620628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.620715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.620775 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:26Z","lastTransitionTime":"2026-01-30T15:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.725000 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b" exitCode=0 Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.725089 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b"} Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.726704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.726762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.726781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.726804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.726823 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:26Z","lastTransitionTime":"2026-01-30T15:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.744807 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.759096 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.770250 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.778544 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.788818 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.801606 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.823096 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cni
bin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.829423 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.829475 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.829484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.829501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.829513 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:26Z","lastTransitionTime":"2026-01-30T15:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.840823 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.855090 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.869959 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.882672 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.897417 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.912712 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.923977 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.932239 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.932285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.932296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.932315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.932328 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:26Z","lastTransitionTime":"2026-01-30T15:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.943185 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.954409 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.962968 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.998581 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.998728 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.998759 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.998786 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:26 crc kubenswrapper[4740]: I0130 15:56:26.998808 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.998927 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.998987 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:30.998967566 +0000 UTC m=+39.636030165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.999381 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.999409 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.999421 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.999426 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.999397 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:56:30.999388277 +0000 UTC m=+39.636450876 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.999518 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:30.99950515 +0000 UTC m=+39.636567749 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.999540 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:30.99953262 +0000 UTC m=+39.636595219 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.999658 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.999742 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.999805 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:26 crc kubenswrapper[4740]: E0130 15:56:26.999944 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:30.99991345 +0000 UTC m=+39.636976049 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.012170 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.034781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.034823 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.034835 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.034854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.034866 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:27Z","lastTransitionTime":"2026-01-30T15:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.047623 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.092555 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.123620 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.137447 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.137487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.137500 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.137521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.137537 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:27Z","lastTransitionTime":"2026-01-30T15:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.163097 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.181254 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:52:36.137219154 +0000 UTC Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.209899 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.240553 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.240895 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.240995 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.241077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.241158 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:27Z","lastTransitionTime":"2026-01-30T15:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.246601 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.284858 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.325514 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\
\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.334846 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.334898 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:27 crc kubenswrapper[4740]: E0130 15:56:27.335237 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:27 crc kubenswrapper[4740]: E0130 15:56:27.335249 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.344110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.344167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.344179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.344199 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.344214 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:27Z","lastTransitionTime":"2026-01-30T15:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.446271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.446331 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.446346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.446396 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.446414 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:27Z","lastTransitionTime":"2026-01-30T15:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.548884 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.548938 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.548947 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.548965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.548978 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:27Z","lastTransitionTime":"2026-01-30T15:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.657145 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.657184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.657195 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.657214 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.657224 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:27Z","lastTransitionTime":"2026-01-30T15:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.729789 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4"}
Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.732247 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555"}
Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.733941 4740 generic.go:334] "Generic (PLEG): container finished" podID="5ece215f-ed67-4d10-8e39-85d49a052d52" containerID="b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58" exitCode=0
Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.734018 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" event={"ID":"5ece215f-ed67-4d10-8e39-85d49a052d52","Type":"ContainerDied","Data":"b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58"}
Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.743512 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.757783 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc 
kubenswrapper[4740]: I0130 15:56:27.759926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.759995 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.760013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.760046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.760070 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:27Z","lastTransitionTime":"2026-01-30T15:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.775264 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.791101 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.814556 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.835726 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.848277 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.864459 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.869391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.869422 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.869431 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.871169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.871197 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:27Z","lastTransitionTime":"2026-01-30T15:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.879443 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.889390 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.901428 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.911991 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.921770 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.939325 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.970584 4740 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"o
vnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.975575 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.975616 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.975625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.975647 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.975660 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:27Z","lastTransitionTime":"2026-01-30T15:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:27 crc kubenswrapper[4740]: I0130 15:56:27.993497 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.011450 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.047491 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.087159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.087222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.087234 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.087270 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.087284 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:28Z","lastTransitionTime":"2026-01-30T15:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.096224 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.126812 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.146192 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-2pc22"] Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.146991 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-2pc22" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.167890 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.178423 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.182270 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:06:57.79627977 +0000 UTC Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.190678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.190734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.190747 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.190771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.190784 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:28Z","lastTransitionTime":"2026-01-30T15:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.198708 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.218621 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.237634 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.288691 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\
\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.293491 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.293525 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.293534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.293552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.293564 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:28Z","lastTransitionTime":"2026-01-30T15:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.315359 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8d02a598-e35a-4a24-bcf9-dc941d1d92d3-serviceca\") pod \"node-ca-2pc22\" (UID: \"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\") " pod="openshift-image-registry/node-ca-2pc22" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.315427 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvkcn\" (UniqueName: \"kubernetes.io/projected/8d02a598-e35a-4a24-bcf9-dc941d1d92d3-kube-api-access-wvkcn\") pod \"node-ca-2pc22\" (UID: \"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\") " pod="openshift-image-registry/node-ca-2pc22" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.315451 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d02a598-e35a-4a24-bcf9-dc941d1d92d3-host\") pod \"node-ca-2pc22\" (UID: \"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\") " pod="openshift-image-registry/node-ca-2pc22" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.328543 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.334828 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:28 crc kubenswrapper[4740]: E0130 15:56:28.335005 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.366220 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.395902 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.395943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.395954 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.395993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.396004 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:28Z","lastTransitionTime":"2026-01-30T15:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.416922 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvkcn\" (UniqueName: \"kubernetes.io/projected/8d02a598-e35a-4a24-bcf9-dc941d1d92d3-kube-api-access-wvkcn\") pod \"node-ca-2pc22\" (UID: \"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\") " pod="openshift-image-registry/node-ca-2pc22"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.416978 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d02a598-e35a-4a24-bcf9-dc941d1d92d3-host\") pod \"node-ca-2pc22\" (UID: \"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\") " pod="openshift-image-registry/node-ca-2pc22"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.417030 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8d02a598-e35a-4a24-bcf9-dc941d1d92d3-serviceca\") pod \"node-ca-2pc22\" (UID: \"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\") " pod="openshift-image-registry/node-ca-2pc22"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.417450 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8d02a598-e35a-4a24-bcf9-dc941d1d92d3-host\") pod \"node-ca-2pc22\" (UID: \"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\") " pod="openshift-image-registry/node-ca-2pc22"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.418264 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8d02a598-e35a-4a24-bcf9-dc941d1d92d3-serviceca\") pod \"node-ca-2pc22\" (UID: \"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\") " pod="openshift-image-registry/node-ca-2pc22"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.427261 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.441313 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvkcn\" (UniqueName: \"kubernetes.io/projected/8d02a598-e35a-4a24-bcf9-dc941d1d92d3-kube-api-access-wvkcn\") pod \"node-ca-2pc22\" (UID: \"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\") " pod="openshift-image-registry/node-ca-2pc22"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.462994 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2pc22"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.469448 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.497820 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.497854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.497863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.497880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.497892 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:28Z","lastTransitionTime":"2026-01-30T15:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.511489 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.544977 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.592995 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.599735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.599788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.599800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.599827 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.599840 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:28Z","lastTransitionTime":"2026-01-30T15:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.627297 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.672590 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.702825 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.702875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.702888 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.702908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.702935 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:28Z","lastTransitionTime":"2026-01-30T15:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.704040 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.746286 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.748947 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" event={"ID":"5ece215f-ed67-4d10-8e39-85d49a052d52","Type":"ContainerStarted","Data":"f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0"}
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.751661 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2pc22" event={"ID":"8d02a598-e35a-4a24-bcf9-dc941d1d92d3","Type":"ContainerStarted","Data":"8d4ef446722b7a1d74682685efdc83c0d81e1d9d773560ff28d68ca5c91c6a80"}
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.754293 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581"}
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.784654 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.806079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.806114 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.806123 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.806208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.806222 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:28Z","lastTransitionTime":"2026-01-30T15:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.828822 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.866231 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.907774 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.909126 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.909174 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.909188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.909205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.909214 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:28Z","lastTransitionTime":"2026-01-30T15:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.953434 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:28 crc kubenswrapper[4740]: I0130 15:56:28.992923 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:28Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.011655 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.011714 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.011729 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.011753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.011766 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:29Z","lastTransitionTime":"2026-01-30T15:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.032286 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.114619 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.114690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.114717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.114743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.114761 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:29Z","lastTransitionTime":"2026-01-30T15:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.182662 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:26:48.288939469 +0000 UTC
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.223668 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.223721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.223731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.223751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.223763 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:29Z","lastTransitionTime":"2026-01-30T15:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.326168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.326207 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.326217 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.326234 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.326246 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:29Z","lastTransitionTime":"2026-01-30T15:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.334584 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.334659 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 15:56:29 crc kubenswrapper[4740]: E0130 15:56:29.334782 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 15:56:29 crc kubenswrapper[4740]: E0130 15:56:29.334917 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.428479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.428518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.428526 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.428541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.428552 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:29Z","lastTransitionTime":"2026-01-30T15:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.531564 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.531631 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.531654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.531687 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.531713 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:29Z","lastTransitionTime":"2026-01-30T15:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.634712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.634742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.634750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.634765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.634774 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:29Z","lastTransitionTime":"2026-01-30T15:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.737042 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.737094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.737105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.737124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.737134 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:29Z","lastTransitionTime":"2026-01-30T15:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.760662 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952"} Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.760720 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef"} Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.760733 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46"} Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.762725 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2pc22" event={"ID":"8d02a598-e35a-4a24-bcf9-dc941d1d92d3","Type":"ContainerStarted","Data":"779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2"} Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.775843 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358
25771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.788437 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.801834 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.818723 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.833040 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.840595 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.840637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.840645 4740 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.840662 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.840671 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:29Z","lastTransitionTime":"2026-01-30T15:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.844800 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.859659 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.877306 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.889436 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.906365 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.924629 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z 
is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.941598 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.943556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.943584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.943595 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.943616 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.943626 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:29Z","lastTransitionTime":"2026-01-30T15:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.967951 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.979122 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:29 crc kubenswrapper[4740]: I0130 15:56:29.991327 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:29Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.007486 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.022561 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.037128 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.046220 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.046254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.046263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.046279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.046291 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:30Z","lastTransitionTime":"2026-01-30T15:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.051248 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.070866 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.097222 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.120894 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.143069 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.149595 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.149923 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.150046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.150172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.150315 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:30Z","lastTransitionTime":"2026-01-30T15:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.183787 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:41:13.066938471 +0000 UTC Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.191233 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.223947 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"
image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.250386 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.253074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.253123 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.253135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.253157 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.253168 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:30Z","lastTransitionTime":"2026-01-30T15:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.267783 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.287191 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:30Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.338643 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:30 crc kubenswrapper[4740]: E0130 15:56:30.338849 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.355846 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.355880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.355888 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.355904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.355913 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:30Z","lastTransitionTime":"2026-01-30T15:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.458526 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.458555 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.458563 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.458576 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.458584 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:30Z","lastTransitionTime":"2026-01-30T15:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.561533 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.561580 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.561591 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.561608 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.561619 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:30Z","lastTransitionTime":"2026-01-30T15:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.664198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.664242 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.664252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.664272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.664284 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:30Z","lastTransitionTime":"2026-01-30T15:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.766630 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.766733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.766753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.766817 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.766842 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:30Z","lastTransitionTime":"2026-01-30T15:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.770644 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9"} Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.770719 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6"} Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.869553 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.869604 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.869620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.869647 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.869665 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:30Z","lastTransitionTime":"2026-01-30T15:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.972447 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.972491 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.972500 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.972536 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:30 crc kubenswrapper[4740]: I0130 15:56:30.972549 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:30Z","lastTransitionTime":"2026-01-30T15:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.056476 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.056593 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.056627 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.056673 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.056695 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.056761 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.056762 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:56:39.056711563 +0000 UTC m=+47.693774202 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.056845 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 15:56:39.056821605 +0000 UTC m=+47.693884424 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.056919 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.056927 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.057057 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:39.057025451 +0000 UTC m=+47.694088300 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.056938 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.057118 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.057167 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:39.057151104 +0000 UTC m=+47.694213703 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.056949 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.057192 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.057201 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.057231 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:39.057224685 +0000 UTC m=+47.694287284 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.075757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.075840 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.075852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.075875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.075889 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:31Z","lastTransitionTime":"2026-01-30T15:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.178533 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.178584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.178598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.178618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.178631 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:31Z","lastTransitionTime":"2026-01-30T15:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.184367 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:14:54.976253233 +0000 UTC
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.282770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.282867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.282890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.282923 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.282946 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:31Z","lastTransitionTime":"2026-01-30T15:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.335060 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.335060 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.335286 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 15:56:31 crc kubenswrapper[4740]: E0130 15:56:31.335334 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.388167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.388226 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.388242 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.388266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.388282 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:31Z","lastTransitionTime":"2026-01-30T15:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.491847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.491925 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.491943 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.491978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.491997 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:31Z","lastTransitionTime":"2026-01-30T15:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.595083 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.595153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.595175 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.595204 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.595222 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:31Z","lastTransitionTime":"2026-01-30T15:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.698884 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.698937 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.698952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.698974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.698990 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:31Z","lastTransitionTime":"2026-01-30T15:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.784373 4740 generic.go:334] "Generic (PLEG): container finished" podID="5ece215f-ed67-4d10-8e39-85d49a052d52" containerID="f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0" exitCode=0
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.784393 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" event={"ID":"5ece215f-ed67-4d10-8e39-85d49a052d52","Type":"ContainerDied","Data":"f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0"}
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.800091 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.802153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.802205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.802218 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.802263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.802277 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:31Z","lastTransitionTime":"2026-01-30T15:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.815825 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.834892 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.852654 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.867091 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.883479 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.899424 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.905028 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.905111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.905134 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.905166 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.905183 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:31Z","lastTransitionTime":"2026-01-30T15:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.912760 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.925012 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.940341 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.962154 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z
is after 2025-08-24T17:21:41Z" Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.977800 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:31 crc kubenswrapper[4740]: I0130 15:56:31.994460 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:31Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.008710 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.008746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.008758 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.008776 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.008812 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:32Z","lastTransitionTime":"2026-01-30T15:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.009645 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.113752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.113794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.113805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.113823 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.113834 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:32Z","lastTransitionTime":"2026-01-30T15:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.185379 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:02:11.914277938 +0000 UTC Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.216941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.216981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.216989 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.217006 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.217017 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:32Z","lastTransitionTime":"2026-01-30T15:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.319418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.319982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.320003 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.320035 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.320054 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:32Z","lastTransitionTime":"2026-01-30T15:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.335267 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:32 crc kubenswrapper[4740]: E0130 15:56:32.335444 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
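The certificate_manager entry just above is worth a note: the kubelet does not wait for its serving certificate to expire (2026-02-24) but schedules rotation at an earlier, jittered deadline (2025-12-30), which has already passed, so rotation is attempted immediately; the later "Certificate rotation detected, shutting down client connections" line is the follow-through. A rough sketch of that scheduling, assuming the commonly cited 70-90% jitter window over the certificate lifetime (the exact fraction and the NotBefore value below are assumptions, chosen only so the numbers roughly cohere with the log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a renewal time at a random point between 70% and
// 90% of the certificate's validity span, so a fleet of kubelets does not
// renew at the same instant. The window is an assumption, not the exact
// upstream constant.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * fraction))
}

func main() {
	// Expiry matches the log line above; the issue time is illustrative.
	notBefore := time.Date(2025, 5, 20, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	deadline := rotationDeadline(notBefore, notAfter)
	now := time.Date(2026, 1, 30, 15, 56, 32, 0, time.UTC) // the log timestamp
	if now.After(deadline) {
		fmt.Printf("deadline %s already passed: rotate now\n", deadline.Format(time.RFC3339))
		return
	}
	fmt.Printf("next rotation at %s\n", deadline.Format(time.RFC3339))
}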
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.422948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.423017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.423029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.423065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.423080 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:32Z","lastTransitionTime":"2026-01-30T15:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.526905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.526955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.526968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.526988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.527005 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:32Z","lastTransitionTime":"2026-01-30T15:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.629054 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.629139 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.629163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.629190 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.629208 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:32Z","lastTransitionTime":"2026-01-30T15:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.732147 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.732201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.732213 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.732235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.732254 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:32Z","lastTransitionTime":"2026-01-30T15:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
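The recurring NodeNotReady condition is independent of the webhook problem: the container runtime reports NetworkReady=false because nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet (the multus and ovnkube pods that would do so are still initializing elsewhere in this log). Conceptually the readiness test is little more than a directory scan; a hedged approximation, not the runtime's actual code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig approximates the network-readiness probe: the CNI network
// is considered configurable once at least one configuration file exists
// in the conf directory. The accepted extensions are an assumption.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Println("NetworkReady=true")
}

Once the network provider drops its configuration into that directory, the condition clears and these repeated Ready=False heartbeats stop.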
Has your network provider started?"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.791690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.793920 4740 generic.go:334] "Generic (PLEG): container finished" podID="5ece215f-ed67-4d10-8e39-85d49a052d52" containerID="598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7" exitCode=0 Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.793978 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" event={"ID":"5ece215f-ed67-4d10-8e39-85d49a052d52","Type":"ContainerDied","Data":"598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.810522 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
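Two restart mechanisms are visible around here: the kube-apiserver-check-endpoints container earlier entered CrashLoopBackOff with "back-off 10s restarting failed container", and network-check-target below shows restartCount 3 with its previous container lost across the node restart. The kubelet's crash-loop delay doubles per failed restart up to a cap; the 10s initial delay and 5m ceiling in this sketch match the widely documented kubelet defaults but should be treated as assumptions:

package main

import (
	"fmt"
	"time"
)

// nextBackOff doubles the restart delay per consecutive failure up to a
// cap, approximating CrashLoopBackOff ("back-off 10s" in the log is the
// first step of this ladder).
func nextBackOff(restartCount int) time.Duration {
	const (
		initial = 10 * time.Second
		ceiling = 5 * time.Minute
	)
	d := initial
	for i := 0; i < restartCount; i++ {
		d *= 2
		if d >= ceiling {
			return ceiling
		}
	}
	return d
}

func main() {
	for i := 0; i <= 6; i++ {
		fmt.Printf("failure %d -> wait %s\n", i, nextBackOff(i))
	}
	// failure 0 -> 10s, 1 -> 20s, 2 -> 40s, ... capped at 5m0s
}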
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.825247 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.834322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.834382 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.834395 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.834416 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.834429 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:32Z","lastTransitionTime":"2026-01-30T15:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.836425 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.847651 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.857920 4740 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.860040 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g/status\": read tcp 38.129.56.121:38176->38.129.56.121:6443: use of closed network connection" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.903166 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.916260 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.928055 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.936608 4740 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.936643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.936653 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.936667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.936679 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:32Z","lastTransitionTime":"2026-01-30T15:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.943880 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.962127 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z 
is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.976268 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.987697 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:32 crc kubenswrapper[4740]: I0130 15:56:32.998867 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:32Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.010911 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.038779 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.038828 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.038838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.038854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.038863 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.142315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.142391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.142402 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.142420 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.142435 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.185584 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:35:50.075941908 +0000 UTC Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.245049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.245124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.245165 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.245193 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.245209 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.334430 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.334557 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:33 crc kubenswrapper[4740]: E0130 15:56:33.334611 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:33 crc kubenswrapper[4740]: E0130 15:56:33.334810 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.347226 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.348091 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.348116 4740 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.348125 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.348139 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.348149 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.361651 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.374501 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.385751 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.397653 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.414162 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.437483 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.450420    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.450453    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.450461    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.450479    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.450492    4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.458646    4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.483377 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.499804 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.511423 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.529254 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.543635 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.553168    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.553211    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.553224    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.553242    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.553254    4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.555383    4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.656497    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.656541    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.656550    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.656567    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.656577    4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.759271    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.759315    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.759323    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.759342    4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.759376    4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.800284    4740 generic.go:334] "Generic (PLEG): container finished" podID="5ece215f-ed67-4d10-8e39-85d49a052d52" containerID="ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead" exitCode=0
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.800366    4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" event={"ID":"5ece215f-ed67-4d10-8e39-85d49a052d52","Type":"ContainerDied","Data":"ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead"}
Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.819595    4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.824869 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.824917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.824930 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.824951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.824970 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
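
The NotReady condition recorded here comes from the kubelet's network-plugin check: it finds no network configuration under /etc/kubernetes/cni/net.d/, so it keeps the node at NetworkReady=false. Below is a minimal standalone sketch of that directory check, not kubelet source: the path is taken verbatim from the message above, and treating .conf/.conflist/.json as loadable configs is an assumption borrowed from libcni's conventions.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory taken verbatim from the kubelet's NetworkReady message.
	confDir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	found := 0
	for _, e := range entries {
		// Treating .conf/.conflist/.json as loadable mirrors libcni's
		// conventions; this is an assumption, not kubelet source.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", e.Name())
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file; kubelet stays NetworkReady=false")
	}
}

Until the network provider writes a config into that directory, the kubelet repeats this same condition on every heartbeat, which is the loop visible throughout the entries below.
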
Has your network provider started?"} Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.846166 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: E0130 15:56:33.846270 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"c
bfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.850736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.850774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.850785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.850802 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.850814 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:33 crc kubenswrapper[4740]: E0130 15:56:33.868678 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.871753 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.874274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.874370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.874385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.874453 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.874470 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.888309 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.903965 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: E0130 15:56:33.909070 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
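
Each "failed to patch status" entry above embeds the strategic-merge-patch body as a Go-quoted string inside err, so quotes arrive escaped in the raw log. A small sketch of recovering such a payload for inspection, assuming the err="..." value has already been cut out of the line; the sample below is a trimmed fragment of the network-check-target-xd92c payload, not the full patch.

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Trimmed fragment of the network-check-target-xd92c payload, as it
	// would look once the err="..." value is cut out of the raw log line.
	raw := `"{\"metadata\":{\"uid\":\"3b6479f0-333b-4a96-9adf-2099afdc2447\"},\"status\":{\"podIP\":null,\"podIPs\":null}}"`

	// Undo the Go-style quoting the kubelet applied when formatting err.
	unquoted, err := strconv.Unquote(raw)
	if err != nil {
		panic(err)
	}

	// The recovered body is an ordinary strategic-merge-patch JSON document.
	var patch struct {
		Metadata struct {
			UID string `json:"uid"`
		} `json:"metadata"`
		Status map[string]any `json:"status"`
	}
	if err := json.Unmarshal([]byte(unquoted), &patch); err != nil {
		panic(err)
	}
	fmt.Println("patch targets pod UID:", patch.Metadata.UID)
	fmt.Println("status fields in patch:", len(patch.Status))
}

The "$setElementOrder/conditions" key visible in the full payloads is strategic-merge-patch bookkeeping for list ordering; it parses as plain JSON like any other status field.
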
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.912638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.912679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.912717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.912738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.912751 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:33 crc kubenswrapper[4740]: E0130 15:56:33.931311 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.932654 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.936303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.936329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.936338 4740 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.936374 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.936390 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.947386 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: E0130 15:56:33.951156 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: E0130 15:56:33.951277 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.953105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.953133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.953142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.953159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.953169 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:33Z","lastTransitionTime":"2026-01-30T15:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.965419 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:33 crc kubenswrapper[4740]: I0130 15:56:33.997123 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"i
nitContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.022696 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.038981 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.056305 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.057693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.057741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.057752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.057770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.057782 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:34Z","lastTransitionTime":"2026-01-30T15:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.070649 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.098222 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.160698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.160739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.160749 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.160768 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.160779 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:34Z","lastTransitionTime":"2026-01-30T15:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.186591 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:22:48.008357503 +0000 UTC Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.263017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.263060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.263069 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.263090 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.263103 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:34Z","lastTransitionTime":"2026-01-30T15:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.335372 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:34 crc kubenswrapper[4740]: E0130 15:56:34.335539 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.365719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.365761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.365769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.365785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.365796 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:34Z","lastTransitionTime":"2026-01-30T15:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.469814 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.470259 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.470279 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.470307 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.470325 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:34Z","lastTransitionTime":"2026-01-30T15:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.572689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.572750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.572764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.572788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.572934 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:34Z","lastTransitionTime":"2026-01-30T15:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.675432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.675491 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.675506 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.675533 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.675552 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:34Z","lastTransitionTime":"2026-01-30T15:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.783665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.783731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.783748 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.783780 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.783804 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:34Z","lastTransitionTime":"2026-01-30T15:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.807987 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc"} Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.808336 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.808422 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.812364 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" event={"ID":"5ece215f-ed67-4d10-8e39-85d49a052d52","Type":"ContainerStarted","Data":"91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b"} Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.830907 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.834054 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.847292 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:
24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.867434 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.882888 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.886603 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.886645 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.886656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.886673 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.886684 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:34Z","lastTransitionTime":"2026-01-30T15:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.900061 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.916995 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.921333 4740 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.941155 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.954835 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.967962 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.993977 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:34Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.996255 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.996303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.996317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.996338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:34 crc kubenswrapper[4740]: I0130 15:56:34.996953 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:34Z","lastTransitionTime":"2026-01-30T15:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.008051 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.021039 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.035091 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.050309 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.064742 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.080944 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.097383 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.101028 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.101108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.101157 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.101184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.101222 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:35Z","lastTransitionTime":"2026-01-30T15:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.116833 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.130363 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.140484 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.151163 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.166602 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.177095 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.186710 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:16:21.947895417 +0000 UTC Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.193267 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.203856 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.203897 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.203911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.203948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.203959 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:35Z","lastTransitionTime":"2026-01-30T15:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.216396 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.230005 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.243430 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.253677 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.265426 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.306947 4740 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.306982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.306991 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.307006 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.307016 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:35Z","lastTransitionTime":"2026-01-30T15:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.335265 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.335264 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:35 crc kubenswrapper[4740]: E0130 15:56:35.335425 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:35 crc kubenswrapper[4740]: E0130 15:56:35.335471 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.409192 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.409223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.409232 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.409248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.409257 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:35Z","lastTransitionTime":"2026-01-30T15:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.511648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.511719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.511736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.511762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.511781 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:35Z","lastTransitionTime":"2026-01-30T15:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.613830 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.613879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.613900 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.613920 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.613934 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:35Z","lastTransitionTime":"2026-01-30T15:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.716797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.716861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.716872 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.716893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.716906 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:35Z","lastTransitionTime":"2026-01-30T15:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.828320 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.828546 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.828565 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.828596 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.828617 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:35Z","lastTransitionTime":"2026-01-30T15:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.833472 4740 generic.go:334] "Generic (PLEG): container finished" podID="5ece215f-ed67-4d10-8e39-85d49a052d52" containerID="91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b" exitCode=0 Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.833601 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" event={"ID":"5ece215f-ed67-4d10-8e39-85d49a052d52","Type":"ContainerDied","Data":"91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b"} Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.851840 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.873896 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.893923 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.908977 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.930290 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.931931 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.931985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.932004 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.932031 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.932050 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:35Z","lastTransitionTime":"2026-01-30T15:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.945543 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.961373 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.976630 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:35 crc kubenswrapper[4740]: I0130 15:56:35.988590 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:35Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.004802 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55a
ef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:36Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.029863 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:36Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.037096 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.037162 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.037174 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.037196 4740 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.037209 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:36Z","lastTransitionTime":"2026-01-30T15:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.041300 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:36Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.054427 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:36Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.087756 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\
\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:36Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.145368 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.145410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.145419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.145444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.145454 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:36Z","lastTransitionTime":"2026-01-30T15:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.187479 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:33:35.300973319 +0000 UTC Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.248588 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.248651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.248666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.248692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.248710 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:36Z","lastTransitionTime":"2026-01-30T15:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.334913 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:36 crc kubenswrapper[4740]: E0130 15:56:36.335096 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.351122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.351164 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.351175 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.351191 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.351205 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:36Z","lastTransitionTime":"2026-01-30T15:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.453935 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.453999 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.454019 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.454043 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.454060 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:36Z","lastTransitionTime":"2026-01-30T15:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.557494 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.557569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.557588 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.557618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.557638 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:36Z","lastTransitionTime":"2026-01-30T15:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.660469 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.660554 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.660574 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.660620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.660642 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:36Z","lastTransitionTime":"2026-01-30T15:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.764712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.764834 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.764855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.764923 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.764944 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:36Z","lastTransitionTime":"2026-01-30T15:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.868450 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.868519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.868535 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.868561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.868577 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:36Z","lastTransitionTime":"2026-01-30T15:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.971821 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.971863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.971874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.971893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:36 crc kubenswrapper[4740]: I0130 15:56:36.971904 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:36Z","lastTransitionTime":"2026-01-30T15:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.074951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.075014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.075024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.075048 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.075060 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:37Z","lastTransitionTime":"2026-01-30T15:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.172907 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq"] Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.173530 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.177583 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.178006 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.178070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.178084 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.178107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.178122 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:37Z","lastTransitionTime":"2026-01-30T15:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.180527 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.187752 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:26:49.884858702 +0000 UTC Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.199475 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\
":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.218667 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.225635 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36df7a4d-789b-4344-83ca-02e0c62f0fd2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: \"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.225739 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36df7a4d-789b-4344-83ca-02e0c62f0fd2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: \"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.225772 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mgr4p\" (UniqueName: \"kubernetes.io/projected/36df7a4d-789b-4344-83ca-02e0c62f0fd2-kube-api-access-mgr4p\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: \"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.225796 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36df7a4d-789b-4344-83ca-02e0c62f0fd2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: \"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.235071 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.253802 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.269842 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.281340 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.281421 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.281431 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.281452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.281467 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:37Z","lastTransitionTime":"2026-01-30T15:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.282728 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.296543 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.312237 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.326731 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36df7a4d-789b-4344-83ca-02e0c62f0fd2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: \"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.326800 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgr4p\" (UniqueName: \"kubernetes.io/projected/36df7a4d-789b-4344-83ca-02e0c62f0fd2-kube-api-access-mgr4p\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: \"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.326838 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36df7a4d-789b-4344-83ca-02e0c62f0fd2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: 
\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.326877 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36df7a4d-789b-4344-83ca-02e0c62f0fd2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: \"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.327941 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36df7a4d-789b-4344-83ca-02e0c62f0fd2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: \"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.328239 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36df7a4d-789b-4344-83ca-02e0c62f0fd2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: \"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.329078 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"
podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.334445 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.334511 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:37 crc kubenswrapper[4740]: E0130 15:56:37.334554 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:37 crc kubenswrapper[4740]: E0130 15:56:37.334693 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.341000 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36df7a4d-789b-4344-83ca-02e0c62f0fd2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: \"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.350074 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgr4p\" (UniqueName: \"kubernetes.io/projected/36df7a4d-789b-4344-83ca-02e0c62f0fd2-kube-api-access-mgr4p\") pod \"ovnkube-control-plane-749d76644c-6vzkq\" (UID: \"36df7a4d-789b-4344-83ca-02e0c62f0fd2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.350476 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.368967 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.383922 4740 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.384121 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.384216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.384309 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.384459 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:37Z","lastTransitionTime":"2026-01-30T15:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.394574 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.421055 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f
62ea1fcdce9145debc1feccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.436371 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.453493 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.487902 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.487965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.487980 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.488006 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.488021 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:37Z","lastTransitionTime":"2026-01-30T15:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.488568 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.591232 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.591275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.591287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.591305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.591318 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:37Z","lastTransitionTime":"2026-01-30T15:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.693643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.693678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.693690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.693709 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.693720 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:37Z","lastTransitionTime":"2026-01-30T15:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.796016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.796269 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.796278 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.796295 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.796306 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:37Z","lastTransitionTime":"2026-01-30T15:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.843838 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" event={"ID":"36df7a4d-789b-4344-83ca-02e0c62f0fd2","Type":"ContainerStarted","Data":"a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.843922 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" event={"ID":"36df7a4d-789b-4344-83ca-02e0c62f0fd2","Type":"ContainerStarted","Data":"a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.843949 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" event={"ID":"36df7a4d-789b-4344-83ca-02e0c62f0fd2","Type":"ContainerStarted","Data":"fb54cd62900222f0a1647a4473f4b065606451700d65ce1f06b8f9a5b4081065"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.850641 4740 generic.go:334] "Generic (PLEG): container finished" podID="5ece215f-ed67-4d10-8e39-85d49a052d52" containerID="68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997" exitCode=0 Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.850688 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" event={"ID":"5ece215f-ed67-4d10-8e39-85d49a052d52","Type":"ContainerDied","Data":"68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.861329 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.874902 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.888183 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 
15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.901463 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.901515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.901530 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.901557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.901576 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:37Z","lastTransitionTime":"2026-01-30T15:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.903806 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.911856 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-krvcv"] Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.912543 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:37 crc kubenswrapper[4740]: E0130 15:56:37.912635 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.935743 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\
\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.952722 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.969555 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.982161 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:37 crc kubenswrapper[4740]: I0130 15:56:37.997891 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:37Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.009616 4740 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.009693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.009708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.009726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.009735 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:38Z","lastTransitionTime":"2026-01-30T15:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.018148 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.033405 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.035874 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxnt2\" (UniqueName: \"kubernetes.io/projected/7f93a9ce-6677-48e3-9476-c37aa40b6347-kube-api-access-zxnt2\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.035970 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.047510 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.062325 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 
2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.084267 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.098685 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.109989 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.112017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.112069 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.112079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.112099 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.112110 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:38Z","lastTransitionTime":"2026-01-30T15:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.121068 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.130150 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.136707 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zxnt2\" (UniqueName: \"kubernetes.io/projected/7f93a9ce-6677-48e3-9476-c37aa40b6347-kube-api-access-zxnt2\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.136783 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:38 crc kubenswrapper[4740]: E0130 15:56:38.136931 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 15:56:38 crc kubenswrapper[4740]: E0130 15:56:38.136994 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs podName:7f93a9ce-6677-48e3-9476-c37aa40b6347 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:38.636972275 +0000 UTC m=+47.274034874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs") pod "network-metrics-daemon-krvcv" (UID: "7f93a9ce-6677-48e3-9476-c37aa40b6347") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.142420 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.159248 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxnt2\" (UniqueName: \"kubernetes.io/projected/7f93a9ce-6677-48e3-9476-c37aa40b6347-kube-api-access-zxnt2\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.164232 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.176012 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.189896 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:26:15.278838414 +0000 UTC Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.191887 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.207424 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.214509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.214552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.214567 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.214586 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.214595 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:38Z","lastTransitionTime":"2026-01-30T15:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.223087 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.236562 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.248408 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.260183 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.271394 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.282184 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.291859 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 
15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.305650 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.317483 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.317534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.317546 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.317565 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.317578 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:38Z","lastTransitionTime":"2026-01-30T15:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.334545 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:56:38 crc kubenswrapper[4740]: E0130 15:56:38.334670 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.421717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.421779 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.421794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.421816 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.421829 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:38Z","lastTransitionTime":"2026-01-30T15:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.525149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.525205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.525218 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.525236 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.525250 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:38Z","lastTransitionTime":"2026-01-30T15:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.633274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.633321 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.633335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.633373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.633386 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:38Z","lastTransitionTime":"2026-01-30T15:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.643206 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:56:38 crc kubenswrapper[4740]: E0130 15:56:38.643377 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 15:56:38 crc kubenswrapper[4740]: E0130 15:56:38.643572 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs podName:7f93a9ce-6677-48e3-9476-c37aa40b6347 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:39.643532935 +0000 UTC m=+48.280595674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs") pod "network-metrics-daemon-krvcv" (UID: "7f93a9ce-6677-48e3-9476-c37aa40b6347") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.736855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.736915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.736940 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.736972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.736996 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:38Z","lastTransitionTime":"2026-01-30T15:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.839741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.839798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.839814 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.839840 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.839856 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:38Z","lastTransitionTime":"2026-01-30T15:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.858724 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" event={"ID":"5ece215f-ed67-4d10-8e39-85d49a052d52","Type":"ContainerStarted","Data":"2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f"} Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.878655 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.894168 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.906090 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.922315 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.939807 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.942580 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.942622 4740 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.942635 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.942662 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.942677 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:38Z","lastTransitionTime":"2026-01-30T15:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.972269 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f
62ea1fcdce9145debc1feccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.984619 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:38 crc kubenswrapper[4740]: I0130 15:56:38.997567 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:38Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.010246 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.024321 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.038049 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.045468 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.045507 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.045519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.045537 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.045548 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:39Z","lastTransitionTime":"2026-01-30T15:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.051140 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.063884 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.078910 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.090116 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.101579 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.148116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.148159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.148169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.148187 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.148198 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:39Z","lastTransitionTime":"2026-01-30T15:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.148614 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.148861 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:56:55.148821452 +0000 UTC m=+63.785884061 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.148920 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.148962 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.149014 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149058 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149123 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:55.149110599 +0000 UTC m=+63.786173198 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.149054 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149151 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149198 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149208 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149240 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:55.149217122 +0000 UTC m=+63.786279741 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149244 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149270 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149215 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149307 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:55.149298404 +0000 UTC m=+63.786361023 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149320 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.149400 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:55.149390606 +0000 UTC m=+63.786453415 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.190679 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:32:30.666927074 +0000 UTC
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.250948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.250990 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.251020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.251036 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.251047 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:39Z","lastTransitionTime":"2026-01-30T15:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.334629 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.334716 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.334805 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.334813 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347"
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.334930 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.335106 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.354757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.354812 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.354825 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.355250 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.355312 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:39Z","lastTransitionTime":"2026-01-30T15:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.458373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.458420 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.458430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.458449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.458460 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:39Z","lastTransitionTime":"2026-01-30T15:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.561258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.561691 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.561700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.561717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.561727 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:39Z","lastTransitionTime":"2026-01-30T15:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.653786 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.653984 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 15:56:39 crc kubenswrapper[4740]: E0130 15:56:39.654056 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs podName:7f93a9ce-6677-48e3-9476-c37aa40b6347 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:41.654035608 +0000 UTC m=+50.291098207 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs") pod "network-metrics-daemon-krvcv" (UID: "7f93a9ce-6677-48e3-9476-c37aa40b6347") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.664173 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.664210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.664219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.664235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.664245 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:39Z","lastTransitionTime":"2026-01-30T15:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.767342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.767424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.767437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.767457 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.767470 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:39Z","lastTransitionTime":"2026-01-30T15:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.864934 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/0.log"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.867734 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc" exitCode=1
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.867780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc"}
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.868567 4740 scope.go:117] "RemoveContainer" containerID="1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.869278 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.869333 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.869346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.869403 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.869417 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:39Z","lastTransitionTime":"2026-01-30T15:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.884043 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.903380 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.925590 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.941269 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.954592 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.965774 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.972029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.972062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.972078 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.972097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.972109 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:39Z","lastTransitionTime":"2026-01-30T15:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.979932 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:39 crc kubenswrapper[4740]: I0130 15:56:39.992010 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.001659 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.033386 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.075039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.075086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.075097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.075118 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan
30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.075131 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:40Z","lastTransitionTime":"2026-01-30T15:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.087218 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.102118 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.112324 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.133264 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.152699 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:39Z\\\",\\\"message\\\":\\\"ddLogicalPort failed for openshift-multus/network-metrics-daemon-krvcv: failed to update pod openshift-multus/network-metrics-daemon-krvcv: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z\\\\nI0130 15:56:39.444588 5982 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:39.444783 5982 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:39.444832 5982 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 15:56:39.444902 5982 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 15:56:39.444938 5982 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:39.445026 5982 factory.go:656] Stopping watch factory\\\\nI0130 15:56:39.445077 5982 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:39.445076 5982 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:39.445187 5982 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:39.445384 5982 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.164447 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.177467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.177520 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.177531 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.177551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.177561 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:40Z","lastTransitionTime":"2026-01-30T15:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.191752 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:59:29.589114694 +0000 UTC Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.280209 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.280273 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.280285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.280307 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.280318 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:40Z","lastTransitionTime":"2026-01-30T15:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.334630 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:40 crc kubenswrapper[4740]: E0130 15:56:40.335200 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.335439 4740 scope.go:117] "RemoveContainer" containerID="612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.383985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.384227 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.384519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.384801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.385018 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:40Z","lastTransitionTime":"2026-01-30T15:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.488775 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.488987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.489241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.489271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.489289 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:40Z","lastTransitionTime":"2026-01-30T15:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.592584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.592637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.592650 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.592669 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.592684 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:40Z","lastTransitionTime":"2026-01-30T15:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.694905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.694970 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.694983 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.695011 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.695027 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:40Z","lastTransitionTime":"2026-01-30T15:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.797900 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.797963 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.797976 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.798000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.798014 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:40Z","lastTransitionTime":"2026-01-30T15:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.875801 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/0.log" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.879622 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc"} Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.880330 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.883420 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.886916 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258"} Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.887412 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.900800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.900860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.900874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.900901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.900916 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:40Z","lastTransitionTime":"2026-01-30T15:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.902916 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.916577 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.929379 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.941232 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.954690 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.972963 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:40 crc kubenswrapper[4740]: I0130 15:56:40.988335 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.003835 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.003818 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:40Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.003895 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.004042 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.004059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.004070 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:41Z","lastTransitionTime":"2026-01-30T15:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.017105 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.035644 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a845
36f68f2a12a070b3fa3bc9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:39Z\\\",\\\"message\\\":\\\"ddLogicalPort failed for openshift-multus/network-metrics-daemon-krvcv: failed to update pod openshift-multus/network-metrics-daemon-krvcv: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z\\\\nI0130 15:56:39.444588 5982 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:39.444783 5982 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:39.444832 5982 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 15:56:39.444902 5982 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 15:56:39.444938 5982 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:39.445026 5982 factory.go:656] Stopping watch factory\\\\nI0130 15:56:39.445077 5982 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:39.445076 5982 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:39.445187 5982 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:39.445384 5982 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.053468 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.069566 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.084470 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.103748 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.106253 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.106284 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.106292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.106310 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.106321 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:41Z","lastTransitionTime":"2026-01-30T15:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.117292 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.131191 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 
2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.146479 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.158298 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.167998 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.183060 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.191907 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:52:52.711174081 +0000 UTC Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.201479 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.208720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.208764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.208775 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.208798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.208812 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:41Z","lastTransitionTime":"2026-01-30T15:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.215680 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.228733 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.244866 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/
os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.267129 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7
a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:39Z\\\",\\\"message\\\":\\\"ddLogicalPort failed for openshift-multus/network-metrics-daemon-krvcv: failed to update pod openshift-multus/network-metrics-daemon-krvcv: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z\\\\nI0130 15:56:39.444588 5982 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:39.444783 5982 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:39.444832 5982 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 15:56:39.444902 5982 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 15:56:39.444938 5982 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:39.445026 5982 factory.go:656] Stopping watch factory\\\\nI0130 15:56:39.445077 5982 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:39.445076 5982 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:39.445187 5982 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:39.445384 5982 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.279102 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.293918 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.311451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.311517 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.311527 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.311544 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.311565 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:41Z","lastTransitionTime":"2026-01-30T15:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.314389 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.329517 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.335120 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.335193 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:41 crc kubenswrapper[4740]: E0130 15:56:41.335300 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.335322 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:41 crc kubenswrapper[4740]: E0130 15:56:41.335465 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:41 crc kubenswrapper[4740]: E0130 15:56:41.335551 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.345203 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.362583 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.378519 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.414895 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.414945 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.414957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.414978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.414991 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:41Z","lastTransitionTime":"2026-01-30T15:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.517757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.517809 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.517824 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.517849 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.517866 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:41Z","lastTransitionTime":"2026-01-30T15:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.620113 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.620170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.620185 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.620206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.620220 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:41Z","lastTransitionTime":"2026-01-30T15:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.676055 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:41 crc kubenswrapper[4740]: E0130 15:56:41.676296 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 15:56:41 crc kubenswrapper[4740]: E0130 15:56:41.676422 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs podName:7f93a9ce-6677-48e3-9476-c37aa40b6347 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:45.676391849 +0000 UTC m=+54.313454458 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs") pod "network-metrics-daemon-krvcv" (UID: "7f93a9ce-6677-48e3-9476-c37aa40b6347") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.722379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.722444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.722460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.722484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.722498 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:41Z","lastTransitionTime":"2026-01-30T15:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.825107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.825149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.825159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.825175 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.825186 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:41Z","lastTransitionTime":"2026-01-30T15:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.893190 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/1.log" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.894334 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/0.log" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.898526 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc" exitCode=1 Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.898903 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc"} Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.899001 4740 scope.go:117] "RemoveContainer" containerID="1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.900435 4740 scope.go:117] "RemoveContainer" containerID="a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc" Jan 30 15:56:41 crc kubenswrapper[4740]: E0130 15:56:41.900712 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.922897 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 
15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.928750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.928813 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.928853 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.928892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.928922 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:41Z","lastTransitionTime":"2026-01-30T15:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.945791 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.964922 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:41 crc kubenswrapper[4740]: I0130 15:56:41.985333 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.001378 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:41Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.018760 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.031332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.031385 4740 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.031395 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.031412 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.031424 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:42Z","lastTransitionTime":"2026-01-30T15:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.047075 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a845
36f68f2a12a070b3fa3bc9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eebeaaa3aca3aa928a53493e4efd7b7e482121f62ea1fcdce9145debc1feccc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:39Z\\\",\\\"message\\\":\\\"ddLogicalPort failed for openshift-multus/network-metrics-daemon-krvcv: failed to update pod openshift-multus/network-metrics-daemon-krvcv: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:39Z is after 2025-08-24T17:21:41Z\\\\nI0130 15:56:39.444588 5982 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:39.444783 5982 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:39.444832 5982 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 15:56:39.444902 5982 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 15:56:39.444938 5982 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:39.445026 5982 factory.go:656] Stopping watch factory\\\\nI0130 15:56:39.445077 5982 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:39.445076 5982 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:39.445187 5982 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:39.445384 5982 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:41Z\\\",\\\"message\\\":\\\"0 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 15:56:41.102399 6210 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 15:56:41.102405 6210 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:41.102414 6210 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:41.102420 6210 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 15:56:41.102439 6210 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 15:56:41.102499 6210 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:41.102512 6210 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 15:56:41.102549 6210 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 15:56:41.102564 6210 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:41.102633 6210 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 15:56:41.102910 6210 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 15:56:41.103015 6210 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 15:56:41.105089 6210 factory.go:656] Stopping watch factory\\\\nI0130 
15:56:41.105152 6210 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:41.105253 6210 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:41.105379 6210 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.062900 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.079593 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.099434 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.114992 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.134808 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.134849 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.134861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.134878 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.134891 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:42Z","lastTransitionTime":"2026-01-30T15:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.135488 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.150515 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.168798 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.187595 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.192663 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:54:57.805855775 +0000 UTC Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.202978 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.239033 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.239087 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.239108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.239136 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.239155 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:42Z","lastTransitionTime":"2026-01-30T15:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.335395 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:42 crc kubenswrapper[4740]: E0130 15:56:42.335670 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.342076 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.342108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.342116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.342134 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.342144 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:42Z","lastTransitionTime":"2026-01-30T15:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.444880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.444982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.445001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.445028 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.445048 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:42Z","lastTransitionTime":"2026-01-30T15:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.547862 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.547936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.547956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.547983 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.548004 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:42Z","lastTransitionTime":"2026-01-30T15:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.651180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.651252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.651273 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.651300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.651319 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:42Z","lastTransitionTime":"2026-01-30T15:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.755847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.755916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.755936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.755962 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.755981 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:42Z","lastTransitionTime":"2026-01-30T15:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.859903 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.859966 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.859983 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.860014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.860035 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:42Z","lastTransitionTime":"2026-01-30T15:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.905847 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/1.log" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.911612 4740 scope.go:117] "RemoveContainer" containerID="a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc" Jan 30 15:56:42 crc kubenswrapper[4740]: E0130 15:56:42.911961 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.935435 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.954922 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.963868 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.963948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.963973 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.964012 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.964039 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:42Z","lastTransitionTime":"2026-01-30T15:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.973931 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\
\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:42 crc kubenswrapper[4740]: I0130 15:56:42.998686 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:42Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.017335 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.034180 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.048587 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.067679 4740 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.067751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.067766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.067790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.067809 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:43Z","lastTransitionTime":"2026-01-30T15:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.070690 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.101572 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a845
36f68f2a12a070b3fa3bc9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:41Z\\\",\\\"message\\\":\\\"0 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 15:56:41.102399 6210 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 15:56:41.102405 6210 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:41.102414 6210 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:41.102420 6210 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 15:56:41.102439 6210 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 15:56:41.102499 6210 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:41.102512 6210 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 15:56:41.102549 6210 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 15:56:41.102564 6210 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:41.102633 6210 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 15:56:41.102910 6210 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 15:56:41.103015 6210 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 15:56:41.105089 6210 factory.go:656] Stopping watch factory\\\\nI0130 15:56:41.105152 6210 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:41.105253 6210 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:41.105379 6210 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.117317 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.135214 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.154506 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.171189 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.171222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.171232 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.171248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.171258 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:43Z","lastTransitionTime":"2026-01-30T15:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.175863 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.194514 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:05:03.969358366 +0000 UTC Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.194887 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.210646 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.225819 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.275260 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.275336 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.275390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.275428 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.275450 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:43Z","lastTransitionTime":"2026-01-30T15:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.334882 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.334924 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:43 crc kubenswrapper[4740]: E0130 15:56:43.335072 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.335178 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:43 crc kubenswrapper[4740]: E0130 15:56:43.335255 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:56:43 crc kubenswrapper[4740]: E0130 15:56:43.335457 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.351511 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.368550 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 
15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.379448 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.379533 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.379559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.379597 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.379624 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:43Z","lastTransitionTime":"2026-01-30T15:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.386045 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.408197 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.425498 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.441782 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.467796 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.483838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.483914 4740 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.483927 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.483950 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.483965 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:43Z","lastTransitionTime":"2026-01-30T15:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.491937 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a845
36f68f2a12a070b3fa3bc9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:41Z\\\",\\\"message\\\":\\\"0 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 15:56:41.102399 6210 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 15:56:41.102405 6210 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:41.102414 6210 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:41.102420 6210 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 15:56:41.102439 6210 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 15:56:41.102499 6210 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:41.102512 6210 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 15:56:41.102549 6210 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 15:56:41.102564 6210 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:41.102633 6210 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 15:56:41.102910 6210 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 15:56:41.103015 6210 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 15:56:41.105089 6210 factory.go:656] Stopping watch factory\\\\nI0130 15:56:41.105152 6210 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:41.105253 6210 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:41.105379 6210 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.513896 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.530983 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.547905 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.564267 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.579064 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.586652 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.586692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.586705 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.586727 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.586740 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:43Z","lastTransitionTime":"2026-01-30T15:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.595697 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.610089 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.623863 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.690712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.690784 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.690808 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.690837 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.690863 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:43Z","lastTransitionTime":"2026-01-30T15:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.794342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.794435 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.794453 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.794478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.794495 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:43Z","lastTransitionTime":"2026-01-30T15:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.898675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.898727 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.898738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.898759 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:43 crc kubenswrapper[4740]: I0130 15:56:43.898770 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:43Z","lastTransitionTime":"2026-01-30T15:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.002156 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.002440 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.002452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.002471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.002482 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.105929 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.106001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.106021 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.106049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.106068 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.194794 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:33:56.154216718 +0000 UTC
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.208904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.208950 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.208960 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.208980 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.208995 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.312087 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.312137 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.312147 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.312173 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.312185 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.334979 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:56:44 crc kubenswrapper[4740]: E0130 15:56:44.335167 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.348254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.348299 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.348309 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.348330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.348342 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:44 crc kubenswrapper[4740]: E0130 15:56:44.369626 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:44Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.374899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.374949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.374960 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.374981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.374995 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:44 crc kubenswrapper[4740]: E0130 15:56:44.397778 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:44Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.402949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.403011 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.403026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.403049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.403065 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:44 crc kubenswrapper[4740]: E0130 15:56:44.427054 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:44Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.440593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.440660 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.440677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.440705 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.440725 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:44 crc kubenswrapper[4740]: E0130 15:56:44.459896 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:44Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.468194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.468283 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.468311 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.468345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.468403 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:44 crc kubenswrapper[4740]: E0130 15:56:44.489443 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:44Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:44 crc kubenswrapper[4740]: E0130 15:56:44.489569 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.492491 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.492556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.492577 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.492603 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.492622 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.598614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.599079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.599089 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.599109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.599120 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.701807 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.701880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.701904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.701934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.701958 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.805331 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.805433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.805452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.805480 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.805501 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.908694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.908761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.908778 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.908800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:44 crc kubenswrapper[4740]: I0130 15:56:44.908817 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:44Z","lastTransitionTime":"2026-01-30T15:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.012768 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.012836 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.012857 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.012889 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.012906 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:45Z","lastTransitionTime":"2026-01-30T15:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.116144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.116274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.116292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.116324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.116345 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:45Z","lastTransitionTime":"2026-01-30T15:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.195561 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:31:38.813049363 +0000 UTC Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.218985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.219062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.219084 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.219129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.219151 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:45Z","lastTransitionTime":"2026-01-30T15:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.322800 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.322886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.322908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.322948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.322976 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:45Z","lastTransitionTime":"2026-01-30T15:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.335537 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.335561 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:45 crc kubenswrapper[4740]: E0130 15:56:45.335733 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:56:45 crc kubenswrapper[4740]: E0130 15:56:45.335905 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.336129 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:45 crc kubenswrapper[4740]: E0130 15:56:45.336295 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.426149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.426476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.426492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.426530 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.426543 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:45Z","lastTransitionTime":"2026-01-30T15:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.531141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.531238 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.531261 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.531303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.531328 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:45Z","lastTransitionTime":"2026-01-30T15:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.634149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.634738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.634816 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.634915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.634989 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:45Z","lastTransitionTime":"2026-01-30T15:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.734237 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:45 crc kubenswrapper[4740]: E0130 15:56:45.734430 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 15:56:45 crc kubenswrapper[4740]: E0130 15:56:45.734539 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs podName:7f93a9ce-6677-48e3-9476-c37aa40b6347 nodeName:}" failed. No retries permitted until 2026-01-30 15:56:53.734512546 +0000 UTC m=+62.371575145 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs") pod "network-metrics-daemon-krvcv" (UID: "7f93a9ce-6677-48e3-9476-c37aa40b6347") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.738394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.738430 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.738442 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.738466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.738482 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:45Z","lastTransitionTime":"2026-01-30T15:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.842100 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.842164 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.842182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.842211 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.842231 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:45Z","lastTransitionTime":"2026-01-30T15:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.945885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.945964 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.946172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.946206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:45 crc kubenswrapper[4740]: I0130 15:56:45.946237 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:45Z","lastTransitionTime":"2026-01-30T15:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.050619 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.050673 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.050684 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.050703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.050714 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:46Z","lastTransitionTime":"2026-01-30T15:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.155006 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.155054 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.155064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.155087 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.155098 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:46Z","lastTransitionTime":"2026-01-30T15:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.196538 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:25:51.407766116 +0000 UTC Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.258726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.258796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.258822 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.258857 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.258881 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:46Z","lastTransitionTime":"2026-01-30T15:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.334603 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:46 crc kubenswrapper[4740]: E0130 15:56:46.334744 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.362734 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.362791 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.362799 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.362819 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.362830 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:46Z","lastTransitionTime":"2026-01-30T15:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.466926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.467794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.468018 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.468200 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.468404 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:46Z","lastTransitionTime":"2026-01-30T15:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.572485 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.572541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.572550 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.572570 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.572582 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:46Z","lastTransitionTime":"2026-01-30T15:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.675812 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.675889 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.675909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.675938 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.675960 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:46Z","lastTransitionTime":"2026-01-30T15:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.779571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.779659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.779679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.780183 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.780240 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:46Z","lastTransitionTime":"2026-01-30T15:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.883223 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.883305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.883326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.883399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.883427 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:46Z","lastTransitionTime":"2026-01-30T15:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.986887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.987263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.987490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.987646 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:46 crc kubenswrapper[4740]: I0130 15:56:46.987780 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:46Z","lastTransitionTime":"2026-01-30T15:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.092460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.092523 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.092541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.092571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.092610 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:47Z","lastTransitionTime":"2026-01-30T15:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.196391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.196475 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.196492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.196518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.196535 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:47Z","lastTransitionTime":"2026-01-30T15:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
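[Editor's note] The certificate_manager.go:356 entries above report a different rotation deadline on every tick (2025-12-01, then 2025-12-18, then 2025-12-05) for the same kubelet-serving certificate. That is consistent with client-go's certificate manager re-drawing a jittered deadline, which from memory falls at a random point roughly 70-90% of the way through the certificate's validity window. The Go sketch below is a hedged approximation of that behavior, not the verbatim client-go code; the notBefore value is assumed, since the log only shows the expiration.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline approximates client-go's jittered deadline choice:
// notBefore + lifetime*jitter, with jitter drawn uniformly from [0.7, 0.9).
// Each call returns a fresh deadline, matching the varying log values.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiration from the log
	notBefore := notAfter.Add(-90 * 24 * time.Hour)                 // assumed issuance time
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}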
Has your network provider started?"} Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.196660 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:49:06.520245011 +0000 UTC Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.299411 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.299757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.299879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.299975 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.300107 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:47Z","lastTransitionTime":"2026-01-30T15:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.335417 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.335527 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:47 crc kubenswrapper[4740]: E0130 15:56:47.335649 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:47 crc kubenswrapper[4740]: E0130 15:56:47.335757 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.336011 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:47 crc kubenswrapper[4740]: E0130 15:56:47.336328 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.404014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.404057 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.404066 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.404084 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.404095 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:47Z","lastTransitionTime":"2026-01-30T15:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.510089 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.510228 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.510257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.510331 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.510397 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:47Z","lastTransitionTime":"2026-01-30T15:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.614250 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.614337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.614405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.614445 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.614470 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:47Z","lastTransitionTime":"2026-01-30T15:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.717000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.717070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.717089 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.717118 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.717137 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:47Z","lastTransitionTime":"2026-01-30T15:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.820894 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.821316 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.821548 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.821750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.821903 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:47Z","lastTransitionTime":"2026-01-30T15:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.925579 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.925628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.925643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.925662 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:47 crc kubenswrapper[4740]: I0130 15:56:47.925677 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:47Z","lastTransitionTime":"2026-01-30T15:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.028450 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.028518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.028537 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.028565 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.028589 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:48Z","lastTransitionTime":"2026-01-30T15:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.132491 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.132545 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.132557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.132576 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.132586 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:48Z","lastTransitionTime":"2026-01-30T15:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
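[Editor's note] Every "Node became not ready" and "Error syncing pod" entry above traces back to the same condition: the container runtime reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/ (the network operator has not written one). A minimal Go sketch of that readiness test follows, assuming the libcni convention of treating the conf dir as ready once it contains at least one *.conf, *.conflist, or *.json file; this is an illustration, not the runtime's actual code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady approximates the check behind NetworkReady=false above:
// the runtime's CNI layer looks for at least one network config file
// in the configured conf directory.
func networkReady(confDir string) (bool, error) {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := networkReady("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Stays false, and the node stays NotReady, until a config appears.
	fmt.Println("NetworkReady:", ready)
}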
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.132586 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:48Z","lastTransitionTime":"2026-01-30T15:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.158303 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.170549 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.178110 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.196582 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.196971 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:32:52.267605902 +0000 UTC
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.212670 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.234779 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.236844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.236896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.236908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.236931 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
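[Editor's note] The "Failed to update status for pod" entries above all end in the same TLS error: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate whose NotAfter (2025-08-24T17:21:41Z) is long past the node's current clock (2026-01-30T15:56:48Z). The error text is the wording of Go's crypto/x509 package. A minimal sketch of that validity-window check follows; the NotBefore value is an assumption, since the log only shows the expiry.

package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// checkValidity reproduces the x509 validity-window test that fails above:
// a chain is rejected when now lies outside [NotBefore, NotAfter], and the
// resulting error prints as "x509: certificate has expired or is not yet
// valid: <detail>", matching the log.
func checkValidity(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		return x509.CertificateInvalidError{
			Cert:   cert,
			Reason: x509.Expired,
			Detail: fmt.Sprintf("current time %s is after %s",
				now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339)),
		}
	}
	return nil
}

func main() {
	cert := &x509.Certificate{
		NotBefore: time.Date(2025, 5, 24, 17, 21, 41, 0, time.UTC), // assumed issuance
		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // expiry from the log
	}
	now := time.Date(2026, 1, 30, 15, 56, 48, 0, time.UTC) // node clock from the log
	fmt.Println(checkValidity(cert, now))
}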
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.236944 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:48Z","lastTransitionTime":"2026-01-30T15:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.261799 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.287458 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.315418 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.334816 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:56:48 crc kubenswrapper[4740]: E0130 15:56:48.334987 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.340034 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.340086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.340096 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.340118 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.340129 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:48Z","lastTransitionTime":"2026-01-30T15:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.344612 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z"
Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.377485 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:41Z\\\",\\\"message\\\":\\\"0 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 15:56:41.102399 6210 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 15:56:41.102405 6210 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:41.102414 6210 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:41.102420 6210 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 15:56:41.102439 6210 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 15:56:41.102499 6210 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:41.102512 6210 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 15:56:41.102549 6210 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 15:56:41.102564 6210 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:41.102633 6210 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 15:56:41.102910 6210 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 15:56:41.103015 6210 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 15:56:41.105089 6210 factory.go:656] Stopping watch factory\\\\nI0130 15:56:41.105152 6210 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:41.105253 6210 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:41.105379 6210 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.391685 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.406419 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.420166 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.443062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.443120 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.443134 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.443158 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.443174 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:48Z","lastTransitionTime":"2026-01-30T15:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.448285 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.469060 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.483845 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.495747 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:48Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.546784 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.546838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.546851 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.546873 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.546888 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:48Z","lastTransitionTime":"2026-01-30T15:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.649772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.649849 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.649873 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.649913 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.649934 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:48Z","lastTransitionTime":"2026-01-30T15:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.753243 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.753303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.753315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.753338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.753380 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:48Z","lastTransitionTime":"2026-01-30T15:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.856771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.856857 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.856876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.856906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.856926 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:48Z","lastTransitionTime":"2026-01-30T15:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.959677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.959765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.959792 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.959826 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:48 crc kubenswrapper[4740]: I0130 15:56:48.959851 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:48Z","lastTransitionTime":"2026-01-30T15:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.063160 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.063206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.063219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.063239 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.063250 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:49Z","lastTransitionTime":"2026-01-30T15:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.167057 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.167120 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.167140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.167167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.167186 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:49Z","lastTransitionTime":"2026-01-30T15:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.198042 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 06:18:38.80296181 +0000 UTC Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.270700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.270763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.270782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.270811 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.270831 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:49Z","lastTransitionTime":"2026-01-30T15:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.334745 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.334909 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:49 crc kubenswrapper[4740]: E0130 15:56:49.334959 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.335006 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:49 crc kubenswrapper[4740]: E0130 15:56:49.335150 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:56:49 crc kubenswrapper[4740]: E0130 15:56:49.335310 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.374603 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.374693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.374716 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.374789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.374816 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:49Z","lastTransitionTime":"2026-01-30T15:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.478638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.478721 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.478740 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.478774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.478794 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:49Z","lastTransitionTime":"2026-01-30T15:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.581858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.581923 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.581940 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.581964 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.581982 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:49Z","lastTransitionTime":"2026-01-30T15:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.685925 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.685984 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.685997 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.686019 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.686033 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:49Z","lastTransitionTime":"2026-01-30T15:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.789079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.789133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.789149 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.789169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.789181 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:49Z","lastTransitionTime":"2026-01-30T15:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.892972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.893035 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.893053 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.893077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.893094 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:49Z","lastTransitionTime":"2026-01-30T15:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.996574 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.996656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.996676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.996710 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:49 crc kubenswrapper[4740]: I0130 15:56:49.996730 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:49Z","lastTransitionTime":"2026-01-30T15:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.101012 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.101070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.101082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.101105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.101118 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:50Z","lastTransitionTime":"2026-01-30T15:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.198356 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:11:51.319765082 +0000 UTC Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.203188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.203259 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.203277 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.203309 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.203327 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:50Z","lastTransitionTime":"2026-01-30T15:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.307127 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.307188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.307207 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.307231 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.307249 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:50Z","lastTransitionTime":"2026-01-30T15:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.334721 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:50 crc kubenswrapper[4740]: E0130 15:56:50.334972 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.411146 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.411212 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.411232 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.411259 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.411280 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:50Z","lastTransitionTime":"2026-01-30T15:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.514581 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.514654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.514677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.514704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.514723 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:50Z","lastTransitionTime":"2026-01-30T15:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.617713 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.617789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.617811 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.617838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.617859 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:50Z","lastTransitionTime":"2026-01-30T15:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.721692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.721759 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.721780 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.721812 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.721831 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:50Z","lastTransitionTime":"2026-01-30T15:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.824382 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.824459 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.824481 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.824506 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.824525 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:50Z","lastTransitionTime":"2026-01-30T15:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.927593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.927650 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.927665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.927685 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:50 crc kubenswrapper[4740]: I0130 15:56:50.927698 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:50Z","lastTransitionTime":"2026-01-30T15:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.030464 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.030520 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.030539 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.030568 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.030587 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:51Z","lastTransitionTime":"2026-01-30T15:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.126689 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.133407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.133454 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.133465 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.133486 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.133499 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:51Z","lastTransitionTime":"2026-01-30T15:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.152708 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126b
d791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.172724 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.190911 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.199058 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:02:35.221724848 +0000 UTC Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.209939 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.223881 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.236559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.236599 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.236614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:51 crc 
kubenswrapper[4740]: I0130 15:56:51.236729 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.236747 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:51Z","lastTransitionTime":"2026-01-30T15:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.242993 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca
001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.261105 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.280300 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.295462 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.310527 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:
37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.328248 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name
\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.335331 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.335377 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.335568 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:51 crc kubenswrapper[4740]: E0130 15:56:51.335686 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:51 crc kubenswrapper[4740]: E0130 15:56:51.335872 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:56:51 crc kubenswrapper[4740]: E0130 15:56:51.336086 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.339772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.339808 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.339823 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.339843 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.339858 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:51Z","lastTransitionTime":"2026-01-30T15:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.356759 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:41Z\\\",\\\"message\\\":\\\"0 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 15:56:41.102399 6210 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 15:56:41.102405 6210 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:41.102414 6210 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:41.102420 6210 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 15:56:41.102439 6210 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 15:56:41.102499 6210 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:41.102512 6210 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 15:56:41.102549 6210 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 15:56:41.102564 6210 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:41.102633 6210 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 15:56:41.102910 6210 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 15:56:41.103015 6210 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 15:56:41.105089 6210 factory.go:656] Stopping watch factory\\\\nI0130 15:56:41.105152 6210 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:41.105253 6210 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:41.105379 6210 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.375038 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.394137 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.408864 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.420520 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.432043 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.442871 4740 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.442938 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.442950 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.442969 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.442979 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:51Z","lastTransitionTime":"2026-01-30T15:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.546463 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.546530 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.546544 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.546560 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.546571 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:51Z","lastTransitionTime":"2026-01-30T15:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.650600 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.650673 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.650689 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.650716 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.650731 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:51Z","lastTransitionTime":"2026-01-30T15:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.753925 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.753984 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.753994 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.754013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.754026 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:51Z","lastTransitionTime":"2026-01-30T15:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.857047 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.857123 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.857142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.857172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.857191 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:51Z","lastTransitionTime":"2026-01-30T15:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
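The "Failed to update status for pod" entries above all die on the same comparison: the webhook's serving certificate is past its NotAfter (current time 2026-01-30T15:56:51Z vs. 2025-08-24T17:21:41Z), so every TLS handshake to https://127.0.0.1:9743 is rejected before a request is sent. A minimal sketch of that validity-window check, assuming a PEM-encoded certificate at a hypothetical path (the webhook's real cert location is not shown in this log):

```go
// check_window.go: the NotBefore/NotAfter comparison behind
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute wherever the webhook cert actually lives.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if now := time.Now(); now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("outside validity window: now=%s notBefore=%s notAfter=%s\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
	}
}
```

Go's crypto/tls performs this check as part of handshake verification, which is why the failure surfaces as a transport error on the Post rather than as an HTTP response.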
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.959999 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.960101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.960114 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.960132 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:51 crc kubenswrapper[4740]: I0130 15:56:51.960146 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:51Z","lastTransitionTime":"2026-01-30T15:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.064245 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.064291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.064305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.064325 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.064338 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:52Z","lastTransitionTime":"2026-01-30T15:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.167167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.167241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.167260 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.167282 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.167294 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:52Z","lastTransitionTime":"2026-01-30T15:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
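Each "Node became not ready" entry is the kubelet stamping a Ready=False condition whose message is the runtime's network error. A sketch of the logged condition shape using the k8s.io/api types (an illustration of the structure, not kubelet's own setter code; assumes the k8s.io/api and k8s.io/apimachinery modules are available):

```go
// not_ready_condition.go: the Ready=False/KubeletNotReady condition shape
// that appears in the setters.go:603 lines above.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// notReadyCondition builds the condition with the runtime error as message.
func notReadyCondition(now time.Time, runtimeMsg string) corev1.NodeCondition {
	return corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  metav1.NewTime(now),
		LastTransitionTime: metav1.NewTime(now),
		Reason:             "KubeletNotReady",
		Message:            runtimeMsg,
	}
}

func main() {
	cond := notReadyCondition(time.Now(),
		"container runtime network not ready: NetworkReady=false")
	fmt.Printf("%+v\n", cond)
}
```

Here the kubelet cannot get the condition patched upstream because the status webhook itself is failing, so the same not-ready events repeat on every sync loop.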
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.199687 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:51:44.093561114 +0000 UTC
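This certificate_manager.go line says the kubelet-serving certificate is valid until 2026-02-24 but was due for rotation on 2026-01-05, already weeks in the past at the logged time, so the manager keeps attempting rotation; a second evaluation below (15:56:53.199826) prints a different deadline, consistent with the deadline being re-jittered on each pass. A sketch of a jittered rotation deadline in the style of client-go's certificate manager, assuming the upstream 70-90% band of the certificate lifetime (both the fraction and the NotBefore value below are assumptions):

```go
// rotation_deadline.go: pick a rotation time at a random point in roughly
// the 70-90% band of the certificate's validity period.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	// 70% of the lifetime, plus up to another 20% of jitter.
	offset := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(offset)
}

func main() {
	// NotAfter is taken from the log line above; NotBefore is assumed.
	notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	fmt.Println("past due:", time.Now().After(deadline))
}
```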
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.270787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.270835 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.270845 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.270867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.270880 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:52Z","lastTransitionTime":"2026-01-30T15:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.334557 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:56:52 crc kubenswrapper[4740]: E0130 15:56:52.334849 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.374495 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.374558 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.374574 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.374597 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.374616 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:52Z","lastTransitionTime":"2026-01-30T15:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.477058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.477144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.477160 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.477186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.477206 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:52Z","lastTransitionTime":"2026-01-30T15:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.580057 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.580140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.580158 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.580187 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.580206 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:52Z","lastTransitionTime":"2026-01-30T15:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.683487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.683539 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.683554 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.683579 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.683592 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:52Z","lastTransitionTime":"2026-01-30T15:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.787375 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.787435 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.787470 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.787498 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.787514 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:52Z","lastTransitionTime":"2026-01-30T15:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.896280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.896362 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.896375 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.896399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:52 crc kubenswrapper[4740]: I0130 15:56:52.896412 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:52Z","lastTransitionTime":"2026-01-30T15:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
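The NetworkReady=false condition repeated above reduces to an empty CNI configuration directory: the runtime finds nothing to load in /etc/kubernetes/cni/net.d/, which is consistent with ovnkube-controller (crash-looping earlier in this log) not yet having written its config. A sketch of the kind of scan behind that message, assuming the conventional libcni extensions (.conf, .conflist, .json); the helper name is ours:

```go
// cni_conf_check.go: report whether a CNI conf dir contains any loadable
// configuration file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d/")
	fmt.Printf("config present: %v err: %v\n", ok, err)
}
```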
Has your network provider started?"} Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.103008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.103444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.103535 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.103641 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.103734 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:53Z","lastTransitionTime":"2026-01-30T15:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.199826 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:30:46.104214613 +0000 UTC Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.207393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.207462 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.207480 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.207508 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.207527 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:53Z","lastTransitionTime":"2026-01-30T15:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.310182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.310266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.310285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.310316 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.310337 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:53Z","lastTransitionTime":"2026-01-30T15:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.334936 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.334945 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.335104 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:53 crc kubenswrapper[4740]: E0130 15:56:53.335309 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:56:53 crc kubenswrapper[4740]: E0130 15:56:53.335521 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:53 crc kubenswrapper[4740]: E0130 15:56:53.335683 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.353117 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.367806 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.387633 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.406713 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.413523 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.413623 4740 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.413774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.413836 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.413867 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:53Z","lastTransitionTime":"2026-01-30T15:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.430741 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a845
36f68f2a12a070b3fa3bc9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:41Z\\\",\\\"message\\\":\\\"0 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 15:56:41.102399 6210 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 15:56:41.102405 6210 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:41.102414 6210 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:41.102420 6210 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 15:56:41.102439 6210 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 15:56:41.102499 6210 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:41.102512 6210 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 15:56:41.102549 6210 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 15:56:41.102564 6210 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:41.102633 6210 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 15:56:41.102910 6210 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 15:56:41.103015 6210 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 15:56:41.105089 6210 factory.go:656] Stopping watch factory\\\\nI0130 15:56:41.105152 6210 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:41.105253 6210 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:41.105379 6210 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.447343 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.467000 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.485131 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.505496 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.517808 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.517857 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.517867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.517883 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.517893 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:53Z","lastTransitionTime":"2026-01-30T15:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.527308 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.545738 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.562074 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.578642 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.591956 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.603475 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.617451 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.621067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.621125 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.621146 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.621176 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.621200 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:53Z","lastTransitionTime":"2026-01-30T15:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.636931 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.724457 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.724505 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.724515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.724533 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.724545 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:53Z","lastTransitionTime":"2026-01-30T15:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.747572 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:53 crc kubenswrapper[4740]: E0130 15:56:53.747913 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 15:56:53 crc kubenswrapper[4740]: E0130 15:56:53.748092 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs podName:7f93a9ce-6677-48e3-9476-c37aa40b6347 nodeName:}" failed. No retries permitted until 2026-01-30 15:57:09.748060657 +0000 UTC m=+78.385123266 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs") pod "network-metrics-daemon-krvcv" (UID: "7f93a9ce-6677-48e3-9476-c37aa40b6347") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.828094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.828158 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.828174 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.828196 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.828210 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:53Z","lastTransitionTime":"2026-01-30T15:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.930817 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.930885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.930899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.930925 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:53 crc kubenswrapper[4740]: I0130 15:56:53.930939 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:53Z","lastTransitionTime":"2026-01-30T15:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.035115 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.035191 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.035203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.035224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.035236 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.138052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.138107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.138119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.138151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.138167 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.200539 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:22:48.460666968 +0000 UTC Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.240755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.240830 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.240848 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.240876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.240894 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.334619 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:54 crc kubenswrapper[4740]: E0130 15:56:54.334831 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.344585 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.344643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.344656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.344676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.344690 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.448238 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.448444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.448472 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.448512 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.448540 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.551304 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.551390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.551408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.551475 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.551497 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.591499 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.591571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.591583 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.591606 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.591618 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: E0130 15:56:54.607933 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.612146 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.612197 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.612211 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.612235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.612250 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: E0130 15:56:54.627445 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.632471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.632516 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.632528 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.632548 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.632561 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: E0130 15:56:54.647607 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.654062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.654169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.654231 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.654296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.654325 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: E0130 15:56:54.678705 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.684772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.684825 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.684842 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.684868 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.684887 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: E0130 15:56:54.702484 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:54 crc kubenswrapper[4740]: E0130 15:56:54.702735 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.705369 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.705406 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.705420 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.705441 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.705456 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.809309 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.809448 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.809476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.809543 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.809572 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.912573 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.912613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.912624 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.912644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:54 crc kubenswrapper[4740]: I0130 15:56:54.912658 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:54Z","lastTransitionTime":"2026-01-30T15:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.015337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.015428 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.015444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.015469 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.015491 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:55Z","lastTransitionTime":"2026-01-30T15:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.119206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.119281 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.119300 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.119330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.119382 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:55Z","lastTransitionTime":"2026-01-30T15:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.167619 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.167784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.167831 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 15:57:27.167792852 +0000 UTC m=+95.804855461 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.167868 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.167889 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.167923 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.167937 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:57:27.167926995 +0000 UTC m=+95.804989594 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.167957 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.168114 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.168132 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.168146 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.168174 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.168184 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 15:57:27.168169451 +0000 UTC m=+95.805232060 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.168210 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:57:27.168199252 +0000 UTC m=+95.805261851 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.168262 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.168282 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.168295 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.168333 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 15:57:27.168320855 +0000 UTC m=+95.805383474 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.201478 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 12:46:03.437291925 +0000 UTC Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.221492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.221529 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.221539 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.221557 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.221567 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:55Z","lastTransitionTime":"2026-01-30T15:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.324928 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.324971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.324987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.325013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.325030 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:55Z","lastTransitionTime":"2026-01-30T15:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.335345 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.335521 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.336719 4740 scope.go:117] "RemoveContainer" containerID="a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.337121 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.337207 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.337109 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:55 crc kubenswrapper[4740]: E0130 15:56:55.337286 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.427589 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.427627 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.427636 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.427654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.427665 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:55Z","lastTransitionTime":"2026-01-30T15:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.530426 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.530465 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.530474 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.530497 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.530506 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:55Z","lastTransitionTime":"2026-01-30T15:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.634252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.634305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.634322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.634370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.634390 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:55Z","lastTransitionTime":"2026-01-30T15:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.738101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.738166 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.738181 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.738205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.738222 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:55Z","lastTransitionTime":"2026-01-30T15:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.841871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.841946 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.841967 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.841998 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.842029 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:55Z","lastTransitionTime":"2026-01-30T15:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.945604 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.945654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.945674 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.945695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.945707 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:55Z","lastTransitionTime":"2026-01-30T15:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.965256 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/1.log" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.968739 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2"} Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.969393 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:56:55 crc kubenswrapper[4740]: I0130 15:56:55.995169 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.016135 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.039430 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.048710 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.048770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.048781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.048833 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.048849 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:56Z","lastTransitionTime":"2026-01-30T15:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.066801 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.101841 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 
2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.128654 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d
e86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:41Z\\\",\\\"message\\\":\\\"0 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 15:56:41.102399 6210 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 15:56:41.102405 6210 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:41.102414 6210 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:41.102420 6210 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 15:56:41.102439 6210 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 15:56:41.102499 6210 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:41.102512 6210 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 15:56:41.102549 6210 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 15:56:41.102564 6210 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:41.102633 6210 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 15:56:41.102910 6210 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 15:56:41.103015 6210 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 15:56:41.105089 6210 factory.go:656] Stopping watch factory\\\\nI0130 15:56:41.105152 6210 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:41.105253 6210 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:41.105379 6210 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.148688 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.152443 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.152478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.152490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.152510 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.152523 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:56Z","lastTransitionTime":"2026-01-30T15:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.187834 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.201644 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:09:02.100500959 +0000 UTC Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.214687 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.236647 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.254723 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.255155 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.255272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.255379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.255452 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:56Z","lastTransitionTime":"2026-01-30T15:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.258192 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.269895 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.282620 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.299010 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.311475 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.320921 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.330504 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 
15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.334605 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:56 crc kubenswrapper[4740]: E0130 15:56:56.334798 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.358289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.358342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.358382 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.358403 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.358415 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:56Z","lastTransitionTime":"2026-01-30T15:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.461549 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.461593 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.461606 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.461623 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.461635 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:56Z","lastTransitionTime":"2026-01-30T15:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.564591 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.564655 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.564668 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.564691 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.564703 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:56Z","lastTransitionTime":"2026-01-30T15:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.668007 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.668064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.668079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.668101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.668116 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:56Z","lastTransitionTime":"2026-01-30T15:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.771521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.771887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.771986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.772109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.772198 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:56Z","lastTransitionTime":"2026-01-30T15:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.876648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.876724 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.876740 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.876766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.876783 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:56Z","lastTransitionTime":"2026-01-30T15:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.979552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.979615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.979630 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.979655 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:56 crc kubenswrapper[4740]: I0130 15:56:56.979673 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:56Z","lastTransitionTime":"2026-01-30T15:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.083173 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.084137 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.084295 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.084480 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.084620 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:57Z","lastTransitionTime":"2026-01-30T15:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.187101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.187518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.187604 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.187691 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.187754 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:57Z","lastTransitionTime":"2026-01-30T15:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.202672 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:00:34.217572669 +0000 UTC Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.290614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.290678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.290693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.290714 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.290728 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:57Z","lastTransitionTime":"2026-01-30T15:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.334596 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.334663 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.334662 4740 util.go:30] "No sandbox for pod can be found. 
Jan 30 15:56:57 crc kubenswrapper[4740]: E0130 15:56:57.334822 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 15:56:57 crc kubenswrapper[4740]: E0130 15:56:57.334929 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 15:56:57 crc kubenswrapper[4740]: E0130 15:56:57.335027 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347"
Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.393062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.393135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.393154 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.393188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.393206 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:57Z","lastTransitionTime":"2026-01-30T15:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.496965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.497017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.497030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.497049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.497065 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:57Z","lastTransitionTime":"2026-01-30T15:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.599825 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.599888 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.599913 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.599942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.599961 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:57Z","lastTransitionTime":"2026-01-30T15:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.703566 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.703614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.703633 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.703652 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.703664 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:57Z","lastTransitionTime":"2026-01-30T15:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.807693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.807756 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.807768 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.807788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.807801 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:57Z","lastTransitionTime":"2026-01-30T15:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.911110 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.911191 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.911210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.911235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.911249 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:57Z","lastTransitionTime":"2026-01-30T15:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.979343 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/2.log" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.980687 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/1.log" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.984734 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2" exitCode=1 Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.984806 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2"} Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.984863 4740 scope.go:117] "RemoveContainer" containerID="a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc" Jan 30 15:56:57 crc kubenswrapper[4740]: I0130 15:56:57.986111 4740 scope.go:117] "RemoveContainer" containerID="06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2" Jan 30 15:56:57 crc kubenswrapper[4740]: E0130 15:56:57.986468 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.002812 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.014262 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.014301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.014311 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.014329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.014342 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:58Z","lastTransitionTime":"2026-01-30T15:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.022165 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.037748 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.053232 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.072501 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.086425 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z"
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.099886 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:
37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.113925 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\
\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.117792 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.117864 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.117880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.118269 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.118312 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:58Z","lastTransitionTime":"2026-01-30T15:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.135597 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.161560 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:41Z\\\",\\\"message\\\":\\\"0 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 15:56:41.102399 6210 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 15:56:41.102405 6210 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:41.102414 6210 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:41.102420 6210 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 15:56:41.102439 6210 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 15:56:41.102499 6210 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:41.102512 6210 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 15:56:41.102549 6210 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 15:56:41.102564 6210 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:41.102633 6210 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 15:56:41.102910 6210 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 15:56:41.103015 6210 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 15:56:41.105089 6210 factory.go:656] Stopping watch factory\\\\nI0130 15:56:41.105152 6210 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:41.105253 6210 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:41.105379 6210 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:56Z\\\",\\\"message\\\":\\\"o run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add 
Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z]\\\\nI0130 15:56:56.836065 6449 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0
b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.177208 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.197397 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.203813 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:47:19.573036262 +0000 UTC Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.216191 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.221418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.221500 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.221517 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.221542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.221555 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:58Z","lastTransitionTime":"2026-01-30T15:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.235951 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.251233 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.265719 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.285996 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.324928 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.324974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.324987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.325004 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.325015 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:58Z","lastTransitionTime":"2026-01-30T15:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.334598 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:56:58 crc kubenswrapper[4740]: E0130 15:56:58.334808 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.428077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.428161 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.428182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.428215 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.428240 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:58Z","lastTransitionTime":"2026-01-30T15:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.530614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.530694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.530720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.530752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.530775 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:58Z","lastTransitionTime":"2026-01-30T15:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.633495 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.633573 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.633634 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.633665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.633689 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:58Z","lastTransitionTime":"2026-01-30T15:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.736795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.736848 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.736858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.736880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.736896 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:58Z","lastTransitionTime":"2026-01-30T15:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.840049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.840105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.840114 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.840136 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.840147 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:58Z","lastTransitionTime":"2026-01-30T15:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.943590 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.943646 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.943656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.943675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.943685 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:58Z","lastTransitionTime":"2026-01-30T15:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:58 crc kubenswrapper[4740]: I0130 15:56:58.990524 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/2.log" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.046433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.046473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.046482 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.046499 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.046510 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:59Z","lastTransitionTime":"2026-01-30T15:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.149106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.149165 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.149180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.149214 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.149233 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:59Z","lastTransitionTime":"2026-01-30T15:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.204514 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:14:31.146243521 +0000 UTC Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.252001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.252046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.252058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.252095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.252107 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:59Z","lastTransitionTime":"2026-01-30T15:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.335405 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.335517 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:56:59 crc kubenswrapper[4740]: E0130 15:56:59.335689 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.335732 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:56:59 crc kubenswrapper[4740]: E0130 15:56:59.335921 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:56:59 crc kubenswrapper[4740]: E0130 15:56:59.336075 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.355258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.355334 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.355386 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.355414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.355433 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:59Z","lastTransitionTime":"2026-01-30T15:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.457993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.458047 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.458060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.458077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.458088 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:59Z","lastTransitionTime":"2026-01-30T15:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.560941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.560997 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.561009 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.561028 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.561041 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:59Z","lastTransitionTime":"2026-01-30T15:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.663716 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.663778 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.663795 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.663821 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.663839 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:59Z","lastTransitionTime":"2026-01-30T15:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.766837 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.766898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.766912 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.766941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.766956 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:59Z","lastTransitionTime":"2026-01-30T15:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.869604 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.869647 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.869656 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.869671 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.869686 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:59Z","lastTransitionTime":"2026-01-30T15:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.972587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.972623 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.972633 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.972648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:56:59 crc kubenswrapper[4740]: I0130 15:56:59.972658 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:56:59Z","lastTransitionTime":"2026-01-30T15:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.075384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.075437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.075448 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.075467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.075479 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:00Z","lastTransitionTime":"2026-01-30T15:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.178504 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.178556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.178569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.178588 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.178604 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:00Z","lastTransitionTime":"2026-01-30T15:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.205544 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:42:07.251283084 +0000 UTC
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.281113 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.281145 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.281153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.281168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.281178 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:00Z","lastTransitionTime":"2026-01-30T15:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.335311 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:57:00 crc kubenswrapper[4740]: E0130 15:57:00.335487 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.383132 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.383159 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.383169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.383181 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.383192 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:00Z","lastTransitionTime":"2026-01-30T15:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.485501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.485831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.485942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.486040 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.486104 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:00Z","lastTransitionTime":"2026-01-30T15:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.588566 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.588611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.588623 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.588642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.588653 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:00Z","lastTransitionTime":"2026-01-30T15:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.691020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.691060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.691070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.691085 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.691095 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:00Z","lastTransitionTime":"2026-01-30T15:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.793289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.793376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.793390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.793418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.793432 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:00Z","lastTransitionTime":"2026-01-30T15:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.897659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.897741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.897768 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.897806 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:00 crc kubenswrapper[4740]: I0130 15:57:00.897826 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:00Z","lastTransitionTime":"2026-01-30T15:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:00.999797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.000119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.000131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.000150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.000165 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:01Z","lastTransitionTime":"2026-01-30T15:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.102751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.102796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.102804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.102821 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.102832 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:01Z","lastTransitionTime":"2026-01-30T15:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.205412 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.205467 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.205480 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.205502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.205517 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:01Z","lastTransitionTime":"2026-01-30T15:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.205798 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:59:04.421976262 +0000 UTC
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.308696 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.308732 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.308743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.308760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.308772 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:01Z","lastTransitionTime":"2026-01-30T15:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.336832 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 15:57:01 crc kubenswrapper[4740]: E0130 15:57:01.336957 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.337131 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:57:01 crc kubenswrapper[4740]: E0130 15:57:01.337181 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.337510 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:57:01 crc kubenswrapper[4740]: E0130 15:57:01.337557 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.411543 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.411584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.411595 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.411612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.411622 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:01Z","lastTransitionTime":"2026-01-30T15:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.514330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.514685 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.514752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.514817 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.514893 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:01Z","lastTransitionTime":"2026-01-30T15:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.617291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.617339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.617369 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.617386 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.617398 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:01Z","lastTransitionTime":"2026-01-30T15:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.720088 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.720134 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.720145 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.720161 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.720172 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:01Z","lastTransitionTime":"2026-01-30T15:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.822148 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.822194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.822206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.822225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.822238 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:01Z","lastTransitionTime":"2026-01-30T15:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.924525 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.924571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.924584 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.924601 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:01 crc kubenswrapper[4740]: I0130 15:57:01.924612 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:01Z","lastTransitionTime":"2026-01-30T15:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.026901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.026965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.026984 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.027008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.027026 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:02Z","lastTransitionTime":"2026-01-30T15:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.129749 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.129806 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.129820 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.129841 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.129856 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:02Z","lastTransitionTime":"2026-01-30T15:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.206504 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:57:38.431496377 +0000 UTC
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.232709 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.232755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.232764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.232790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.232800 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:02Z","lastTransitionTime":"2026-01-30T15:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.334279 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:57:02 crc kubenswrapper[4740]: E0130 15:57:02.334485 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.335724 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.335767 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.335781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.335817 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.335832 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:02Z","lastTransitionTime":"2026-01-30T15:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.438707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.438763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.438774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.438802 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.438813 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:02Z","lastTransitionTime":"2026-01-30T15:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.541187 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.541251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.541268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.541299 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.541333 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:02Z","lastTransitionTime":"2026-01-30T15:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.643671 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.643741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.643762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.643796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.643834 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:02Z","lastTransitionTime":"2026-01-30T15:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.746169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.746212 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.746221 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.746237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.746249 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:02Z","lastTransitionTime":"2026-01-30T15:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.848995 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.849406 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.849446 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.849482 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.849511 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:02Z","lastTransitionTime":"2026-01-30T15:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.952449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.952514 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.952539 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.952571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:02 crc kubenswrapper[4740]: I0130 15:57:02.952594 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:02Z","lastTransitionTime":"2026-01-30T15:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.056281 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.056376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.056394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.056419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.056436 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:03Z","lastTransitionTime":"2026-01-30T15:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.159079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.159129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.159141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.159161 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.159174 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:03Z","lastTransitionTime":"2026-01-30T15:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.207721 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:12:37.015019684 +0000 UTC
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.262343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.263260 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.263373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.263473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.263557 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:03Z","lastTransitionTime":"2026-01-30T15:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.334980 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.335021 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.335084 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:57:03 crc kubenswrapper[4740]: E0130 15:57:03.335201 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 15:57:03 crc kubenswrapper[4740]: E0130 15:57:03.335275 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 15:57:03 crc kubenswrapper[4740]: E0130 15:57:03.335341 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.358462 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74626826f4d8551356372a8bd0239f05668a84536f68f2a12a070b3fa3bc9cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:41Z\\\",\\\"message\\\":\\\"0 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 15:56:41.102399 6210 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 15:56:41.102405 6210 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 15:56:41.102414 6210 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 15:56:41.102420 6210 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 15:56:41.102439 6210 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 15:56:41.102499 6210 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 15:56:41.102512 6210 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 15:56:41.102549 6210 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 15:56:41.102564 6210 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 15:56:41.102633 6210 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 15:56:41.102910 6210 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0130 15:56:41.103015 6210 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0130 15:56:41.105089 6210 factory.go:656] Stopping watch factory\\\\nI0130 15:56:41.105152 6210 ovnkube.go:599] Stopped ovnkube\\\\nI0130 15:56:41.105253 6210 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 15:56:41.105379 6210 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:56Z\\\",\\\"message\\\":\\\"o run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z]\\\\nI0130 15:56:56.836065 6449 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.365399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.365441 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.365452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.365476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.365491 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:03Z","lastTransitionTime":"2026-01-30T15:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.368507 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.385103 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.397945 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.409784 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.422081 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.442247 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.455031 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.468156 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.468208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.468222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.468243 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.468260 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:03Z","lastTransitionTime":"2026-01-30T15:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.471551 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.485076 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.495524 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.508007 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.520482 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.538664 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.554673 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.567906 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.570810 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.570871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.570887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.570909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.570924 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:03Z","lastTransitionTime":"2026-01-30T15:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.579763 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\
\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.672806 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.672855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.672865 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.672884 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.672896 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:03Z","lastTransitionTime":"2026-01-30T15:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.775930 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.775981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.775991 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.776010 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.776021 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:03Z","lastTransitionTime":"2026-01-30T15:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.878531 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.878587 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.878600 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.878622 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.878640 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:03Z","lastTransitionTime":"2026-01-30T15:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.981341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.981405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.981419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.981439 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:03 crc kubenswrapper[4740]: I0130 15:57:03.981450 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:03Z","lastTransitionTime":"2026-01-30T15:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.086717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.086777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.086791 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.086818 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.086837 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:04Z","lastTransitionTime":"2026-01-30T15:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.189889 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.189949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.189959 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.189977 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.189988 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:04Z","lastTransitionTime":"2026-01-30T15:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.208230 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:56:18.883493406 +0000 UTC Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.292636 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.292737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.292764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.292803 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.292831 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:04Z","lastTransitionTime":"2026-01-30T15:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.334839 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:04 crc kubenswrapper[4740]: E0130 15:57:04.335014 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.394987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.395034 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.395043 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.395060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.395074 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:04Z","lastTransitionTime":"2026-01-30T15:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.497805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.497862 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.497877 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.497905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.497917 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:04Z","lastTransitionTime":"2026-01-30T15:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.600151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.600208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.600220 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.600240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.600254 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:04Z","lastTransitionTime":"2026-01-30T15:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.703682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.703745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.703758 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.703781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.703796 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:04Z","lastTransitionTime":"2026-01-30T15:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.807011 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.807075 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.807087 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.807104 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.807115 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:04Z","lastTransitionTime":"2026-01-30T15:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.909961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.910395 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.910487 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.910614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:04 crc kubenswrapper[4740]: I0130 15:57:04.910774 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:04Z","lastTransitionTime":"2026-01-30T15:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.014432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.014516 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.014541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.014574 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.014598 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.093941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.094003 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.094023 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.094052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.094070 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:05 crc kubenswrapper[4740]: E0130 15:57:05.109056 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:05Z is after 
2025-08-24T17:21:41Z" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.113962 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.114029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.114094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.114123 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.114148 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:05 crc kubenswrapper[4740]: E0130 15:57:05.129388 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:05Z is after 
2025-08-24T17:21:41Z" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.134289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.134334 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.134363 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.134382 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.134392 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:05 crc kubenswrapper[4740]: E0130 15:57:05.148028 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:05Z is after 
2025-08-24T17:21:41Z" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.153611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.153647 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.153657 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.153675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.153690 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:05 crc kubenswrapper[4740]: E0130 15:57:05.171415 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:05Z is after 
2025-08-24T17:21:41Z" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.176081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.176119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.176129 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.176145 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.176155 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:05 crc kubenswrapper[4740]: E0130 15:57:05.190448 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:05Z is after 
2025-08-24T17:21:41Z" Jan 30 15:57:05 crc kubenswrapper[4740]: E0130 15:57:05.190564 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.192509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.192558 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.192571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.192590 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.192604 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.209101 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:03:11.648059698 +0000 UTC Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.295660 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.295710 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.295723 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.295743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.295757 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.335224 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.335304 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:05 crc kubenswrapper[4740]: E0130 15:57:05.335444 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.335663 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:05 crc kubenswrapper[4740]: E0130 15:57:05.335738 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:05 crc kubenswrapper[4740]: E0130 15:57:05.335906 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.398691 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.398749 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.398764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.398786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.398800 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.398691 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.398749 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.398764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.398786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.398800 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.502102 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.502170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.502189 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.502222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.502241 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.605292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.605385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.605411 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.605439 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.605457 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.707624 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.707665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.707674 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.707693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.707707 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.810227 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.810278 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.810287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.810305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.810318 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.913513 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.913838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.913919 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.913996 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:05 crc kubenswrapper[4740]: I0130 15:57:05.914055 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:05Z","lastTransitionTime":"2026-01-30T15:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.016713 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.016782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.016808 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.016838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.016862 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:06Z","lastTransitionTime":"2026-01-30T15:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.119194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.119240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.119252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.119272 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.119285 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:06Z","lastTransitionTime":"2026-01-30T15:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.209995 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:32:14.941781946 +0000 UTC
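The certificate_manager.go:356 lines report the same kubelet-serving expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on each evaluation (2026-01-11 earlier, 2025-12-01 here). That is expected behavior: the certificate manager re-draws a jittered deadline somewhere in roughly the 70-90% span of the certificate's lifetime every time it checks. A minimal sketch, not part of the captured log, of that computation; the 0.7/0.2 constants and the one-year notBefore are assumptions for illustration, not values taken from the client-go source:

// rotation_deadline.go - editor's sketch, not part of the log.
// Shows why the logged "rotation deadline" differs on every evaluation
// even though the certificate itself has not changed.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in an assumed 70-90% window of the
// certificate's lifetime, mirroring the jitter visible in the log.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // assumed constants, for illustration
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                         // assumed one-year lifetime
	for i := 0; i < 3; i++ {
		// each evaluation draws a fresh deadline, as consecutive log lines show
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
	}
}

All of the drawn deadlines land before the node's current clock of 2026-01-30, which is why the manager keeps re-evaluating on every pass through this log.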
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.221837 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.221877 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.221886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.221904 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.221914 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:06Z","lastTransitionTime":"2026-01-30T15:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.324461 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.324501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.324511 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.324550 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.324559 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:06Z","lastTransitionTime":"2026-01-30T15:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.334556 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:57:06 crc kubenswrapper[4740]: E0130 15:57:06.334715 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.427003 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.427080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.427092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.427108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.427117 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:06Z","lastTransitionTime":"2026-01-30T15:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.529819 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.529862 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.529872 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.529888 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.529898 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:06Z","lastTransitionTime":"2026-01-30T15:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.632186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.632235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.632247 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.632269 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.632282 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:06Z","lastTransitionTime":"2026-01-30T15:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.735137 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.735191 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.735206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.735227 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.735239 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:06Z","lastTransitionTime":"2026-01-30T15:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.837873 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.837951 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.837961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.837979 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.837989 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:06Z","lastTransitionTime":"2026-01-30T15:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.940411 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.940476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.940488 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.940507 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:06 crc kubenswrapper[4740]: I0130 15:57:06.940520 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:06Z","lastTransitionTime":"2026-01-30T15:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.043427 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.043469 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.043479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.043497 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.043512 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:07Z","lastTransitionTime":"2026-01-30T15:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.146256 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.146306 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.146319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.146341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.146386 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:07Z","lastTransitionTime":"2026-01-30T15:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.211536 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:41:46.076584152 +0000 UTC
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.248802 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.248854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.248869 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.248896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.248911 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:07Z","lastTransitionTime":"2026-01-30T15:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.334430 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.334517 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.334588 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
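The certificate_manager.go:356 lines above report the same serving-certificate expiration each time but a different rotation deadline: client-go's certificate manager recomputes a jittered deadline inside the certificate's validity window, which is why the date moves around between 2025-12 and 2026-01. A sketch of the idea; the 70%-90% window and the assumed notBefore are illustrative assumptions, not the exact upstream constants (see k8s.io/client-go/util/certificate for the real computation):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point late in the certificate's
// validity, so repeated calls yield different deadlines while the
// expiration stays fixed, matching the pattern in the log.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	lo, span := 0.7, 0.2 // assumed 70%-90% window
	jittered := time.Duration((lo + span*rand.Float64()) * float64(total))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, time.August, 24, 5, 53, 3, 0, time.UTC)  // assumed issue time
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}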
Jan 30 15:57:07 crc kubenswrapper[4740]: E0130 15:57:07.334583 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347"
Jan 30 15:57:07 crc kubenswrapper[4740]: E0130 15:57:07.334680 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 15:57:07 crc kubenswrapper[4740]: E0130 15:57:07.334784 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.351207 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.351242 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.351251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.351264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.351276 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:07Z","lastTransitionTime":"2026-01-30T15:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.453915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.453972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.453988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.454013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.454030 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:07Z","lastTransitionTime":"2026-01-30T15:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.556868 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.556934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.556953 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.556983 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.557002 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:07Z","lastTransitionTime":"2026-01-30T15:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.660296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.660392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.660407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.660429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.660442 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:07Z","lastTransitionTime":"2026-01-30T15:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.763102 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.763153 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.763164 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.763184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.763199 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:07Z","lastTransitionTime":"2026-01-30T15:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.866161 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.866207 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.866216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.866235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.866244 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:07Z","lastTransitionTime":"2026-01-30T15:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.969285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.969331 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.969341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.969378 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:07 crc kubenswrapper[4740]: I0130 15:57:07.969390 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:07Z","lastTransitionTime":"2026-01-30T15:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.082665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.082726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.082738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.082760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.082771 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:08Z","lastTransitionTime":"2026-01-30T15:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.185953 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.186007 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.186019 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.186038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.186053 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:08Z","lastTransitionTime":"2026-01-30T15:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
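Each setters.go:603 entry above serializes a core/v1 NodeCondition. Rebuilding the payload as a Go literal makes the fields easier to scan; values are copied from the log, and the sketch assumes a go.mod that pulls in the k8s.io/api and k8s.io/apimachinery modules:

package main

import (
	"encoding/json"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Timestamp from the log entry; heartbeat and transition coincide here.
	ts := metav1.NewTime(time.Date(2026, time.January, 30, 15, 57, 8, 0, time.UTC))
	cond := corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  ts,
		LastTransitionTime: ts,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.Marshal(cond)
	fmt.Println(string(out)) // mirrors the condition={...} payload in the log
}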
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.212433 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:06:12.043942177 +0000 UTC
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.289529 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.289879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.289953 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.290032 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.290102 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:08Z","lastTransitionTime":"2026-01-30T15:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.335262 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:57:08 crc kubenswrapper[4740]: E0130 15:57:08.335849 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.392456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.392492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.392501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.392516 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.392525 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:08Z","lastTransitionTime":"2026-01-30T15:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.495779 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.495823 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.495834 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.495852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.495863 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:08Z","lastTransitionTime":"2026-01-30T15:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.598213 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.598262 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.598271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.598289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.598306 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:08Z","lastTransitionTime":"2026-01-30T15:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.701113 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.701175 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.701194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.701220 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.701240 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:08Z","lastTransitionTime":"2026-01-30T15:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.804525 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.804610 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.804635 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.804670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.804695 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:08Z","lastTransitionTime":"2026-01-30T15:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.909829 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.909916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.909942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.909974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:08 crc kubenswrapper[4740]: I0130 15:57:08.910001 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:08Z","lastTransitionTime":"2026-01-30T15:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.013156 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.013222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.013236 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.013263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.013277 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:09Z","lastTransitionTime":"2026-01-30T15:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.118831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.118886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.118896 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.118916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.118927 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:09Z","lastTransitionTime":"2026-01-30T15:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.214155 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:14:37.792888716 +0000 UTC
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.221179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.221219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.221229 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.221247 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.221258 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:09Z","lastTransitionTime":"2026-01-30T15:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.323562 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.323618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.323629 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.323647 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.323658 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:09Z","lastTransitionTime":"2026-01-30T15:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.334974 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.334987 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:57:09 crc kubenswrapper[4740]: E0130 15:57:09.335134 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 15:57:09 crc kubenswrapper[4740]: E0130 15:57:09.335253 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.335080 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:57:09 crc kubenswrapper[4740]: E0130 15:57:09.335518 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.425920 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.425977 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.425987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.426008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.426021 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:09Z","lastTransitionTime":"2026-01-30T15:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.529055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.529098 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.529109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.529127 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.529139 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:09Z","lastTransitionTime":"2026-01-30T15:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.632669 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.632744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.632766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.632794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.632812 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:09Z","lastTransitionTime":"2026-01-30T15:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.735659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.735729 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.735739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.735757 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.735768 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:09Z","lastTransitionTime":"2026-01-30T15:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.833728 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:57:09 crc kubenswrapper[4740]: E0130 15:57:09.833949 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 15:57:09 crc kubenswrapper[4740]: E0130 15:57:09.834034 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs podName:7f93a9ce-6677-48e3-9476-c37aa40b6347 nodeName:}" failed. No retries permitted until 2026-01-30 15:57:41.834010049 +0000 UTC m=+110.471072648 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs") pod "network-metrics-daemon-krvcv" (UID: "7f93a9ce-6677-48e3-9476-c37aa40b6347") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.838003 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.838049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.838061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.838079 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.838092 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:09Z","lastTransitionTime":"2026-01-30T15:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.940203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.940262 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.940276 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.940298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:09 crc kubenswrapper[4740]: I0130 15:57:09.940309 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:09Z","lastTransitionTime":"2026-01-30T15:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
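The nestedpendingoperations.go:348 entry above shows the volume manager's exponential backoff: after repeated MountVolume.SetUp failures the next retry is pushed out by durationBeforeRetry 32s. A sketch of the doubling-with-cap pattern behind that number; the 500ms start and 2m2s cap are assumed defaults (32s is consistent with seven consecutive failures under those values), not a copy of the kubelet code:

package main

import (
	"fmt"
	"time"
)

// backoff doubles the wait after each failure, up to a cap.
type backoff struct {
	initial, max, current time.Duration
}

func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current *= 2
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func main() {
	b := backoff{initial: 500 * time.Millisecond, max: 2*time.Minute + 2*time.Second}
	for i := 0; i < 8; i++ {
		fmt.Println(b.next()) // 500ms 1s 2s 4s 8s 16s 32s 1m4s; 32s matches the log
	}
}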
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.042013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.042063 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.042073 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.042092 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.042105 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:10Z","lastTransitionTime":"2026-01-30T15:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.145969 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.146072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.146099 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.146138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.146163 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:10Z","lastTransitionTime":"2026-01-30T15:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.214640 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:55:38.216402508 +0000 UTC
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.249515 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.249571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.249586 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.249606 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.249621 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:10Z","lastTransitionTime":"2026-01-30T15:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.334734 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:57:10 crc kubenswrapper[4740]: E0130 15:57:10.335027 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.336707 4740 scope.go:117] "RemoveContainer" containerID="06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2"
Jan 30 15:57:10 crc kubenswrapper[4740]: E0130 15:57:10.336960 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.353914 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.353955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.353964 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.353981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.353993 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:10Z","lastTransitionTime":"2026-01-30T15:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.358825 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.371066 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.387377 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.399652 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\
\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.421178 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath
\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.452187 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/
etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:56Z\\\",\\\"message\\\":\\\"o run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z]\\\\nI0130 15:56:56.836065 6449 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {be9dc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.458118 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.458187 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.458205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.458231 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.458254 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:10Z","lastTransitionTime":"2026-01-30T15:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.471408 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.493360 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.514309 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.527808 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.545898 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.588577 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.588649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.588665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.588692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.588709 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:10Z","lastTransitionTime":"2026-01-30T15:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.591995 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.614829 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.631438 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.646065 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.657465 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.669469 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.691871 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.691922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.691933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.691955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.691969 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:10Z","lastTransitionTime":"2026-01-30T15:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.794947 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.795002 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.795012 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.795033 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.795044 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:10Z","lastTransitionTime":"2026-01-30T15:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.898143 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.898219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.898241 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.898277 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:10 crc kubenswrapper[4740]: I0130 15:57:10.898302 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:10Z","lastTransitionTime":"2026-01-30T15:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.001152 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.001206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.001219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.001238 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.001249 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:11Z","lastTransitionTime":"2026-01-30T15:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.103884 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.103931 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.103941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.103956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.103967 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:11Z","lastTransitionTime":"2026-01-30T15:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.207061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.207112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.207126 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.207146 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.207160 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:11Z","lastTransitionTime":"2026-01-30T15:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.215691 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:14:03.819460566 +0000 UTC Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.310105 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.310178 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.310198 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.310225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.310268 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:11Z","lastTransitionTime":"2026-01-30T15:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.335098 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.335225 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.335098 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:11 crc kubenswrapper[4740]: E0130 15:57:11.335445 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:11 crc kubenswrapper[4740]: E0130 15:57:11.335661 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:11 crc kubenswrapper[4740]: E0130 15:57:11.335823 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.413736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.414131 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.414463 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.414631 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.414772 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:11Z","lastTransitionTime":"2026-01-30T15:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.518508 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.519106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.519555 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.520327 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.520847 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:11Z","lastTransitionTime":"2026-01-30T15:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.624765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.624834 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.624852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.624874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.624888 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:11Z","lastTransitionTime":"2026-01-30T15:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.728891 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.728964 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.728996 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.729024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.729040 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:11Z","lastTransitionTime":"2026-01-30T15:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.832409 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.832488 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.832499 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.832522 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.832534 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:11Z","lastTransitionTime":"2026-01-30T15:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.935248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.935308 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.935321 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.935376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:11 crc kubenswrapper[4740]: I0130 15:57:11.935392 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:11Z","lastTransitionTime":"2026-01-30T15:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.038052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.038114 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.038127 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.038148 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.038188 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:12Z","lastTransitionTime":"2026-01-30T15:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.141277 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.141327 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.141340 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.141378 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.141390 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:12Z","lastTransitionTime":"2026-01-30T15:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.216148 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:04:57.430797189 +0000 UTC Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.244029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.244065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.244077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.244095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.244105 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:12Z","lastTransitionTime":"2026-01-30T15:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.334922 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:12 crc kubenswrapper[4740]: E0130 15:57:12.335074 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.346877 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.346916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.346928 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.346945 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.346956 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:12Z","lastTransitionTime":"2026-01-30T15:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.450052 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.450095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.450106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.450128 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.450143 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:12Z","lastTransitionTime":"2026-01-30T15:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.553130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.553199 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.553215 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.553238 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.553252 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:12Z","lastTransitionTime":"2026-01-30T15:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.655606 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.655666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.655678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.655703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.655718 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:12Z","lastTransitionTime":"2026-01-30T15:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.758086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.758157 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.758167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.758189 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.758201 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:12Z","lastTransitionTime":"2026-01-30T15:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.860615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.860654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.860665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.860681 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.860693 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:12Z","lastTransitionTime":"2026-01-30T15:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.963215 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.963291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.963311 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.963341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:12 crc kubenswrapper[4740]: I0130 15:57:12.963400 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:12Z","lastTransitionTime":"2026-01-30T15:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.065182 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.065225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.065239 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.065258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.065272 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:13Z","lastTransitionTime":"2026-01-30T15:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.167478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.167526 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.167534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.167551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.167564 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:13Z","lastTransitionTime":"2026-01-30T15:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.216843 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:36:35.313140751 +0000 UTC Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.270008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.270059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.270072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.270095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.270109 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:13Z","lastTransitionTime":"2026-01-30T15:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.335056 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.335154 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:13 crc kubenswrapper[4740]: E0130 15:57:13.335221 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:13 crc kubenswrapper[4740]: E0130 15:57:13.335390 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.335472 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:13 crc kubenswrapper[4740]: E0130 15:57:13.335669 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.349388 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.360999 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.372403 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.372451 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.372461 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.372476 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.372487 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:13Z","lastTransitionTime":"2026-01-30T15:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.375006 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.394942 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 
2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.422490 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d
e86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:56Z\\\",\\\"message\\\":\\\"o run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z]\\\\nI0130 15:56:56.836065 6449 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.437407 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.457854 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.473953 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.475590 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.475626 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.475637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.475657 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.475669 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:13Z","lastTransitionTime":"2026-01-30T15:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.489207 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.506980 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.519800 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.533189 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.544864 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.559645 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.571564 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.579244 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.579302 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.579313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.579332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.579343 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:13Z","lastTransitionTime":"2026-01-30T15:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.588017 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z" Jan 30 
15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.606473 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:13Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.681969 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.682410 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.682514 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.682607 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.682702 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:13Z","lastTransitionTime":"2026-01-30T15:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.785528 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.785663 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.785676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.785701 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.785716 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:13Z","lastTransitionTime":"2026-01-30T15:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.888055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.888108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.888122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.888144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.888160 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:13Z","lastTransitionTime":"2026-01-30T15:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.990614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.990681 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.990695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.990717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:13 crc kubenswrapper[4740]: I0130 15:57:13.990731 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:13Z","lastTransitionTime":"2026-01-30T15:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.093735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.093792 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.093802 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.093823 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.093837 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:14Z","lastTransitionTime":"2026-01-30T15:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.196685 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.196742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.196753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.196770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.196781 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:14Z","lastTransitionTime":"2026-01-30T15:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.218166 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:58:56.036383324 +0000 UTC
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.299284 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.299332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.299343 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.299383 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.299396 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:14Z","lastTransitionTime":"2026-01-30T15:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.334979 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:57:14 crc kubenswrapper[4740]: E0130 15:57:14.335225 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.403141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.403504 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.403525 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.403553 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.403572 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:14Z","lastTransitionTime":"2026-01-30T15:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.506945 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.507046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.507065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.507096 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.507116 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:14Z","lastTransitionTime":"2026-01-30T15:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.610057 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.610122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.610141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.610170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.610188 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:14Z","lastTransitionTime":"2026-01-30T15:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.713731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.714230 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.714387 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.714548 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.714697 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:14Z","lastTransitionTime":"2026-01-30T15:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.819554 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.819645 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.819667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.819696 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.819717 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:14Z","lastTransitionTime":"2026-01-30T15:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.923535 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.923604 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.923618 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.923642 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:14 crc kubenswrapper[4740]: I0130 15:57:14.923661 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:14Z","lastTransitionTime":"2026-01-30T15:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.028072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.028123 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.028138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.028165 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.028180 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.130747 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.130792 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.130801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.130819 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.130832 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.211867 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.211926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.211935 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.211955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.211968 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.219143 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:35:13.183874819 +0000 UTC
Jan 30 15:57:15 crc kubenswrapper[4740]: E0130 15:57:15.228743 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:15Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.233065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.233133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.233151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.233179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.233198 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:15 crc kubenswrapper[4740]: E0130 15:57:15.249020 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:15Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.253744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.253806 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.253824 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.253853 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.253879 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:15 crc kubenswrapper[4740]: E0130 15:57:15.271573 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:15Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.277134 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.277177 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.277186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.277203 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.277214 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:15 crc kubenswrapper[4740]: E0130 15:57:15.294007 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:15Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.298555 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.298591 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.298604 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.298621 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.298634 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:15 crc kubenswrapper[4740]: E0130 15:57:15.314400 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:15Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:15 crc kubenswrapper[4740]: E0130 15:57:15.314638 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.317163 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.317231 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.317250 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.317280 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.317297 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.334791 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.334860 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.334886 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:15 crc kubenswrapper[4740]: E0130 15:57:15.334950 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:15 crc kubenswrapper[4740]: E0130 15:57:15.335124 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:15 crc kubenswrapper[4740]: E0130 15:57:15.335300 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.421449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.421501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.421513 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.421534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.421551 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.525078 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.525670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.525914 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.526090 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.526265 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.629745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.629791 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.629804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.629825 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.629839 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.735319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.735387 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.735400 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.735420 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.735435 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.838486 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.838825 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.838906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.838968 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.839026 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.942032 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.942103 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.942130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.942160 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:15 crc kubenswrapper[4740]: I0130 15:57:15.942179 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:15Z","lastTransitionTime":"2026-01-30T15:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.051673 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.052029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.052101 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.052172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.052238 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:16Z","lastTransitionTime":"2026-01-30T15:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.155173 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.155521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.155605 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.155683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.155740 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:16Z","lastTransitionTime":"2026-01-30T15:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.219988 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:02:21.336078828 +0000 UTC Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.258654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.258702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.258714 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.258737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.258751 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:16Z","lastTransitionTime":"2026-01-30T15:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.335256 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:16 crc kubenswrapper[4740]: E0130 15:57:16.335621 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.353186 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.361755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.361796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.361808 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.361825 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.361838 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:16Z","lastTransitionTime":"2026-01-30T15:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.464957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.465002 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.465013 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.465038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.465052 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:16Z","lastTransitionTime":"2026-01-30T15:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.568186 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.568686 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.568843 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.569000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.569150 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:16Z","lastTransitionTime":"2026-01-30T15:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.673197 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.673678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.673834 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.673994 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.674131 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:16Z","lastTransitionTime":"2026-01-30T15:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.777957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.778023 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.778037 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.778063 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.778077 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:16Z","lastTransitionTime":"2026-01-30T15:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.881450 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.881542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.881567 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.881602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.881625 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:16Z","lastTransitionTime":"2026-01-30T15:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.984440 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.984846 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.984985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.985164 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:16 crc kubenswrapper[4740]: I0130 15:57:16.985442 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:16Z","lastTransitionTime":"2026-01-30T15:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.091252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.091338 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.091418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.091452 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.091476 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:17Z","lastTransitionTime":"2026-01-30T15:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.194634 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.194705 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.194722 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.194748 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.194766 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:17Z","lastTransitionTime":"2026-01-30T15:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.220179 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:14:30.562965431 +0000 UTC Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.298221 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.298265 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.298274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.298293 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.298305 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:17Z","lastTransitionTime":"2026-01-30T15:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.335180 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:17 crc kubenswrapper[4740]: E0130 15:57:17.335414 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.335636 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.335647 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:17 crc kubenswrapper[4740]: E0130 15:57:17.335745 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:17 crc kubenswrapper[4740]: E0130 15:57:17.335879 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.400983 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.401043 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.401062 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.401088 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.401105 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:17Z","lastTransitionTime":"2026-01-30T15:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.504890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.504967 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.504987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.505021 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.505042 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:17Z","lastTransitionTime":"2026-01-30T15:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.607949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.607988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.607998 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.608014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.608026 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:17Z","lastTransitionTime":"2026-01-30T15:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.711269 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.711344 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.711404 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.711437 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.711461 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:17Z","lastTransitionTime":"2026-01-30T15:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.815713 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.815783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.815801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.815827 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.815852 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:17Z","lastTransitionTime":"2026-01-30T15:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.918819 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.919252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.919474 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.919671 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:17 crc kubenswrapper[4740]: I0130 15:57:17.919879 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:17Z","lastTransitionTime":"2026-01-30T15:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.024027 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.024518 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.024667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.024788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.025063 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:18Z","lastTransitionTime":"2026-01-30T15:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.061776 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pkzlw_e65088cb-e700-4af1-b788-af399f918bd0/kube-multus/0.log" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.061865 4740 generic.go:334] "Generic (PLEG): container finished" podID="e65088cb-e700-4af1-b788-af399f918bd0" containerID="db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4" exitCode=1 Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.061920 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pkzlw" event={"ID":"e65088cb-e700-4af1-b788-af399f918bd0","Type":"ContainerDied","Data":"db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4"} Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.062684 4740 scope.go:117] "RemoveContainer" containerID="db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.104006 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca680160339
1877448591ea8058661741f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:56Z\\\",\\\"message\\\":\\\"o run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z]\\\\nI0130 15:56:56.836065 6449 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.127341 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.130254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.130299 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.131605 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.131647 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.131666 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:18Z","lastTransitionTime":"2026-01-30T15:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.153805 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.181331 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.199483 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.219504 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.220423 4740 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:22:58.105561175 +0000 UTC Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.234559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.234598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.234610 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.234626 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.234638 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:18Z","lastTransitionTime":"2026-01-30T15:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.244830 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.263424 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.284634 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.308758 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:17Z\\\",\\\"message\\\":\\\"2026-01-30T15:56:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb\\\\n2026-01-30T15:56:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb to /host/opt/cni/bin/\\\\n2026-01-30T15:56:32Z [verbose] multus-daemon started\\\\n2026-01-30T15:56:32Z [verbose] Readiness Indicator file check\\\\n2026-01-30T15:57:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.326990 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.334793 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:18 crc kubenswrapper[4740]: E0130 15:57:18.335114 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.337205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.337538 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.337676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.337778 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.337859 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:18Z","lastTransitionTime":"2026-01-30T15:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.346511 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.363224 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8121a4c-4723-4864-ad5b-e0a2e78ffa3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b540992041764ee2a7fabcdda74b222e97ce330e8488b265a5486b559f09aba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.378806 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.399210 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.419741 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.435388 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.441291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.441589 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.441736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.441983 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.442101 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:18Z","lastTransitionTime":"2026-01-30T15:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.457013 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\
\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:18Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.545283 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.545337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.545370 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.545388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.545400 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:18Z","lastTransitionTime":"2026-01-30T15:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.649379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.649750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.649811 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.649893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.649960 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:18Z","lastTransitionTime":"2026-01-30T15:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.752457 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.752507 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.752519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.752538 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.752549 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:18Z","lastTransitionTime":"2026-01-30T15:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.855342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.855415 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.855426 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.855446 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.855461 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:18Z","lastTransitionTime":"2026-01-30T15:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.958493 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.958771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.958865 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.959005 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:18 crc kubenswrapper[4740]: I0130 15:57:18.959105 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:18Z","lastTransitionTime":"2026-01-30T15:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.061786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.061844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.061854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.061874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.061888 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:19Z","lastTransitionTime":"2026-01-30T15:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.067025 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pkzlw_e65088cb-e700-4af1-b788-af399f918bd0/kube-multus/0.log" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.067087 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pkzlw" event={"ID":"e65088cb-e700-4af1-b788-af399f918bd0","Type":"ContainerStarted","Data":"1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46"} Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.083251 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.095553 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.110937 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:
37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.129621 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7
462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.151409 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.164171 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.164222 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.164233 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.164250 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.164263 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:19Z","lastTransitionTime":"2026-01-30T15:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.169891 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.191052 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.209516 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.221138 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:41:15.937875523 +0000 UTC Jan 30 15:57:19 crc 
kubenswrapper[4740]: I0130 15:57:19.230238 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b1
54c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:56Z\\\",\\\"message\\\":\\\"o run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z]\\\\nI0130 15:56:56.836065 6449 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.241108 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.253302 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.266980 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.267615 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.267760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.267861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.267996 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.268117 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:19Z","lastTransitionTime":"2026-01-30T15:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.280747 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:17Z\\\",\\\"message\\\":\\\"2026-01-30T15:56:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb\\\\n2026-01-30T15:56:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb to /host/opt/cni/bin/\\\\n2026-01-30T15:56:32Z [verbose] multus-daemon started\\\\n2026-01-30T15:56:32Z [verbose] Readiness Indicator file check\\\\n2026-01-30T15:57:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.292124 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.302543 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8121a4c-4723-4864-ad5b-e0a2e78ffa3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b540992041764ee2a7fabcdda74b222e97ce330e8488b265a5486b559f09aba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.320448 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.334959 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.335012 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.335069 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:57:19 crc kubenswrapper[4740]: E0130 15:57:19.335120 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 15:57:19 crc kubenswrapper[4740]: E0130 15:57:19.335273 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347"
Jan 30 15:57:19 crc kubenswrapper[4740]: E0130 15:57:19.335507 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.338305 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.356601 4740 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:19Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.370815 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.370851 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.370864 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.370885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.370900 4740 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:19Z","lastTransitionTime":"2026-01-30T15:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.475330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.475424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.475444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.475471 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.475490 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:19Z","lastTransitionTime":"2026-01-30T15:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.579278 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.579409 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.579441 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.579478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.579517 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:19Z","lastTransitionTime":"2026-01-30T15:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.682550 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.682625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.682637 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.682683 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.682699 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:19Z","lastTransitionTime":"2026-01-30T15:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.785682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.785725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.785733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.785753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.785763 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:19Z","lastTransitionTime":"2026-01-30T15:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.888373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.888431 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.888444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.888463 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.888476 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:19Z","lastTransitionTime":"2026-01-30T15:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.991070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.991122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.991132 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.991151 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:19 crc kubenswrapper[4740]: I0130 15:57:19.991163 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:19Z","lastTransitionTime":"2026-01-30T15:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.094393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.094479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.094491 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.094509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.094521 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:20Z","lastTransitionTime":"2026-01-30T15:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.197635 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.197691 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.197708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.197732 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.197762 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:20Z","lastTransitionTime":"2026-01-30T15:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.222175 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:40:48.035834462 +0000 UTC Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.300926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.300975 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.300987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.301007 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.301019 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:20Z","lastTransitionTime":"2026-01-30T15:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.334619 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:20 crc kubenswrapper[4740]: E0130 15:57:20.334775 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.404885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.404963 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.404985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.405021 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.405046 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:20Z","lastTransitionTime":"2026-01-30T15:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.508496 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.508563 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.508585 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.508617 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.508639 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:20Z","lastTransitionTime":"2026-01-30T15:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.612139 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.612208 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.612224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.612249 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.612266 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:20Z","lastTransitionTime":"2026-01-30T15:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.716070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.716155 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.716179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.716216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.716239 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:20Z","lastTransitionTime":"2026-01-30T15:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.819805 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.819849 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.819861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.819878 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.819889 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:20Z","lastTransitionTime":"2026-01-30T15:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.923739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.923819 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.923842 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.923870 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:20 crc kubenswrapper[4740]: I0130 15:57:20.923889 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:20Z","lastTransitionTime":"2026-01-30T15:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.026756 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.026820 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.026831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.026850 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.026864 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:21Z","lastTransitionTime":"2026-01-30T15:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.129216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.129274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.129286 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.129306 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.129320 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:21Z","lastTransitionTime":"2026-01-30T15:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.222640 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 19:37:48.878222382 +0000 UTC Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.231908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.231963 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.231972 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.231988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.231999 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:21Z","lastTransitionTime":"2026-01-30T15:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.334309 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.334400 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.334464 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:21 crc kubenswrapper[4740]: E0130 15:57:21.334646 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.334833 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:21 crc kubenswrapper[4740]: E0130 15:57:21.334822 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.334909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:21 crc kubenswrapper[4740]: E0130 15:57:21.334916 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.334930 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.334967 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.334981 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:21Z","lastTransitionTime":"2026-01-30T15:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.437206 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.437251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.437278 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.437297 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.437307 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:21Z","lastTransitionTime":"2026-01-30T15:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.540014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.540091 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.540111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.540144 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.540168 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:21Z","lastTransitionTime":"2026-01-30T15:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.643051 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.643117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.643128 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.643148 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.643159 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:21Z","lastTransitionTime":"2026-01-30T15:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.746323 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.746448 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.746466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.746496 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.746511 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:21Z","lastTransitionTime":"2026-01-30T15:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.849610 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.849660 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.849672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.849690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.849706 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:21Z","lastTransitionTime":"2026-01-30T15:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.952905 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.952979 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.952993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.953014 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:21 crc kubenswrapper[4740]: I0130 15:57:21.953026 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:21Z","lastTransitionTime":"2026-01-30T15:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.056096 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.056154 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.056167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.056188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.056581 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:22Z","lastTransitionTime":"2026-01-30T15:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.159678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.159747 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.159759 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.159778 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.159790 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:22Z","lastTransitionTime":"2026-01-30T15:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.223425 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:55:39.908745472 +0000 UTC Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.262304 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.262378 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.262396 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.262418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.262435 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:22Z","lastTransitionTime":"2026-01-30T15:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.335046 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:22 crc kubenswrapper[4740]: E0130 15:57:22.335229 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.365424 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.365502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.365519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.365546 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.365566 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:22Z","lastTransitionTime":"2026-01-30T15:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.469025 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.469090 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.469107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.469135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.469155 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:22Z","lastTransitionTime":"2026-01-30T15:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.573078 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.573148 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.573160 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.573179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.573196 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:22Z","lastTransitionTime":"2026-01-30T15:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.677017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.677068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.677078 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.677097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.677109 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:22Z","lastTransitionTime":"2026-01-30T15:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.780421 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.780492 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.780502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.780544 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.780560 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:22Z","lastTransitionTime":"2026-01-30T15:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.883893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.884273 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.884464 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.884614 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.884736 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:22Z","lastTransitionTime":"2026-01-30T15:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.987499 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.987612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.987641 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.987676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:22 crc kubenswrapper[4740]: I0130 15:57:22.987742 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:22Z","lastTransitionTime":"2026-01-30T15:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.090122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.090191 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.090211 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.090240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.090263 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:23Z","lastTransitionTime":"2026-01-30T15:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.193772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.194438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.194643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.194824 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.194994 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:23Z","lastTransitionTime":"2026-01-30T15:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.224447 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:23:01.653268139 +0000 UTC Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.298491 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.298530 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.298541 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.298559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.298570 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:23Z","lastTransitionTime":"2026-01-30T15:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.335653 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.335711 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:23 crc kubenswrapper[4740]: E0130 15:57:23.335784 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.336000 4740 scope.go:117] "RemoveContainer" containerID="06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.336090 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:23 crc kubenswrapper[4740]: E0130 15:57:23.336159 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:23 crc kubenswrapper[4740]: E0130 15:57:23.336303 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.355661 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.376673 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.395506 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:17Z\\\",\\\"message\\\":\\\"2026-01-30T15:56:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb\\\\n2026-01-30T15:56:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb to /host/opt/cni/bin/\\\\n2026-01-30T15:56:32Z [verbose] multus-daemon started\\\\n2026-01-30T15:56:32Z [verbose] Readiness Indicator file check\\\\n2026-01-30T15:57:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.401553 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.401605 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.401621 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.401648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.401664 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:23Z","lastTransitionTime":"2026-01-30T15:57:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.415965 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.431099 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.446754 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.463770 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.479331 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8121a4c-4723-4864-ad5b-e0a2e78ffa3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b540992041764ee2a7fabcdda74b222e97ce330e8488b265a5486b559f09aba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.493995 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.504810 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.505390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.505502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.505563 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.505647 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.505706 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:23Z","lastTransitionTime":"2026-01-30T15:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.519063 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\
\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.531558 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.547973 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.581511 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:56Z\\\",\\\"message\\\":\\\"o run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z]\\\\nI0130 15:56:56.836065 6449 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {be9dc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.605324 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.609030 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.609084 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.609108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.609138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.609160 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:23Z","lastTransitionTime":"2026-01-30T15:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.626893 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.641449 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.654894 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:23Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.712304 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.712372 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.712385 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.712407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.712446 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:23Z","lastTransitionTime":"2026-01-30T15:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.815802 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.815884 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.815912 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.815945 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.815968 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:23Z","lastTransitionTime":"2026-01-30T15:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.923138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.923207 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.923224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.923253 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:23 crc kubenswrapper[4740]: I0130 15:57:23.923273 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:23Z","lastTransitionTime":"2026-01-30T15:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.026974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.027304 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.027316 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.027339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.027374 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:24Z","lastTransitionTime":"2026-01-30T15:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.087880 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/2.log" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.091452 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f"} Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.092505 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.116041 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.131167 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.131221 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.131237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.131258 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.131272 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:24Z","lastTransitionTime":"2026-01-30T15:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.139843 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8dc1e54716cce53de0fb3adacb9c5fb41a9539c
ac97749d4dac5de001c1145f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:56Z\\\",\\\"message\\\":\\\"o run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z]\\\\nI0130 15:56:56.836065 6449 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{be9dc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.153231 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.170744 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.187694 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.202819 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.214218 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.225446 4740 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:52:10.551271611 +0000 UTC Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.227418 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.233854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.233936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.233954 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.233977 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.233993 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:24Z","lastTransitionTime":"2026-01-30T15:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.242612 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.258504 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:17Z\\\",\\\"message\\\":\\\"2026-01-30T15:56:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb\\\\n2026-01-30T15:56:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb to /host/opt/cni/bin/\\\\n2026-01-30T15:56:32Z [verbose] multus-daemon started\\\\n2026-01-30T15:56:32Z [verbose] Readiness Indicator file check\\\\n2026-01-30T15:57:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.271147 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.280715 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.292715 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.303543 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8121a4c-4723-4864-ad5b-e0a2e78ffa3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b540992041764ee2a7fabcdda74b222e97ce330e8488b265a5486b559f09aba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.325398 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.334916 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:24 crc kubenswrapper[4740]: E0130 15:57:24.335176 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.336807 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.336844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.336855 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.336875 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.336889 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:24Z","lastTransitionTime":"2026-01-30T15:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.343185 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.355175 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.367420 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:24Z is after 2025-08-24T17:21:41Z" Jan 30 
15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.439387 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.439432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.439444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.439463 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.439481 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:24Z","lastTransitionTime":"2026-01-30T15:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.542038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.542082 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.542093 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.542109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.542124 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:24Z","lastTransitionTime":"2026-01-30T15:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.645613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.645676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.645695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.645723 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.645744 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:24Z","lastTransitionTime":"2026-01-30T15:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.748125 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.748168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.748178 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.748194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.748206 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:24Z","lastTransitionTime":"2026-01-30T15:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.851527 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.851932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.852005 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.852080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.852142 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:24Z","lastTransitionTime":"2026-01-30T15:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.955508 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.955583 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.955601 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.955628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:24 crc kubenswrapper[4740]: I0130 15:57:24.955647 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:24Z","lastTransitionTime":"2026-01-30T15:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.058581 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.058650 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.058667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.058694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.058715 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.097345 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/3.log" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.098822 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/2.log" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.102787 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f" exitCode=1 Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.102853 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f"} Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.102914 4740 scope.go:117] "RemoveContainer" containerID="06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.104923 4740 scope.go:117] "RemoveContainer" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f" Jan 30 15:57:25 crc kubenswrapper[4740]: E0130 15:57:25.105209 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.138634 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.160979 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.162073 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.162116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.162133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.162232 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.162257 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.180786 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.200552 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.225680 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.226113 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:27:03.014937618 +0000 UTC Jan 30 15:57:25 crc 
kubenswrapper[4740]: I0130 15:57:25.260276 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b1
54c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06ab2dbacbf9b52baa9459240edecca6801603391877448591ea8058661741f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:56:56Z\\\",\\\"message\\\":\\\"o run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:56:56Z is after 2025-08-24T17:21:41Z]\\\\nI0130 15:56:56.836065 6449 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.10:53: 10.217.4.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{be9dc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:24Z\\\",\\\"message\\\":\\\"5-08-24T17:21:41Z]\\\\nI0130 15:57:24.670463 6817 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-7c7j6\\\\nI0130 15:57:24.670455 6817 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 15:57:24.670479 6817 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0130 15:57:24.668922 6817 services_controller.go:434] Service openshift-multus/multus-admission-controller retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{multus-admission-controller openshift-multus c9eea3b1-f918-4c62-9731-c809988317c1 4579 0 2025-02-23 05:21:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:multus-admission-controller] map[service.alpha.openshift.io/serving-cert-secret-name:multus-admission-controller-secret service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc007c8a197 0xc007c8a198}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},ServicePort{Name\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"va
r-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.265111 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.265321 4740 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.265453 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.266770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.266988 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.277541 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.296227 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.312578 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.331502 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:17Z\\\",\\\"message\\\":\\\"2026-01-30T15:56:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb\\\\n2026-01-30T15:56:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb to /host/opt/cni/bin/\\\\n2026-01-30T15:56:32Z [verbose] multus-daemon started\\\\n2026-01-30T15:56:32Z [verbose] Readiness Indicator file check\\\\n2026-01-30T15:57:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.334662 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.334661 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.334920 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:25 crc kubenswrapper[4740]: E0130 15:57:25.334828 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:25 crc kubenswrapper[4740]: E0130 15:57:25.335242 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:25 crc kubenswrapper[4740]: E0130 15:57:25.335346 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.353544 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.370066 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.370162 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.370177 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.370196 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.370207 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.371241 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8121a4c-4723-4864-ad5b-e0a2e78ffa3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b540992041764ee2a7fabcdda74b222e97ce330e8488b265a5486b559f09aba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.391856 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.412971 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.429391 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.447468 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.463209 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.472447 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.472627 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.472741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.472837 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.472912 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.481195 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.577755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.577885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.577907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.577941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.577980 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: E0130 15:57:25.593868 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.598854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.598911 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.598928 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.598958 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.598977 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: E0130 15:57:25.614763 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.619513 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.619775 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.619921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.620072 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.620196 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: E0130 15:57:25.644582 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.650772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.650838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.650860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.650893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.650912 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: E0130 15:57:25.671992 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.677921 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.678020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
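Each retry above rebuilds and resends the same status patch; only the microsecond timestamps differ. Upstream kubelet bounds this loop with its nodeStatusUpdateRetry constant (five attempts per sync) before surfacing the terminal "update node status exceeds retry count" error that appears below. A minimal sketch of that loop shape, assuming the conventional constant; tryUpdateNodeStatus is a hypothetical stand-in for the kubelet's actual patch logic, not its real implementation:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // nodeStatusUpdateRetry mirrors the upstream kubelet constant: how many
    // times one status sync retries the PATCH before giving up (assumption).
    const nodeStatusUpdateRetry = 5

    // tryUpdateNodeStatus is a hypothetical stand-in for the call that builds
    // the status patch and sends it through the (here, broken) webhook path.
    func tryUpdateNodeStatus(attempt int) error {
    	return errors.New("x509: certificate has expired or is not yet valid")
    }

    func updateNodeStatus() error {
    	for i := 0; i < nodeStatusUpdateRetry; i++ {
    		if err := tryUpdateNodeStatus(i); err != nil {
    			// Matches the repeated E-level entries above.
    			fmt.Printf("Error updating node status, will retry: %v\n", err)
    			continue
    		}
    		return nil
    	}
    	// Matches the terminal entry logged after the last attempt below.
    	return fmt.Errorf("update node status exceeds retry count")
    }

    func main() {
    	if err := updateNodeStatus(); err != nil {
    		fmt.Println(err)
    	}
    }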
event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.678040 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.678066 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.678086 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: E0130 15:57:25.693435 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:25Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:25 crc kubenswrapper[4740]: E0130 15:57:25.693669 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.695983 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
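Every attempt dies on the same TLS handshake: the network-node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30, so the apiserver can never deliver the patch. A minimal diagnostic sketch that dials the endpoint from the Post URL in the error (assuming the webhook is still listening on 127.0.0.1:9743) and prints the leaf certificate's validity window:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// Skip chain verification on purpose: the point is to inspect the
    	// expired certificate, not to fail the handshake as the kubelet did.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		log.Fatalf("dial webhook: %v", err)
    	}
    	defer conn.Close()

    	// PeerCertificates[0] is the serving (leaf) certificate.
    	leaf := conn.ConnectionState().PeerCertificates[0]
    	fmt.Printf("subject:   %s\n", leaf.Subject)
    	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
    	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
    	if time.Now().After(leaf.NotAfter) {
    		fmt.Println("certificate is expired, matching the x509 error in the log")
    	}
    }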
event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.696070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.696100 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.696141 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.696173 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.799532 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.799576 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.799589 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.799612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.799627 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.903630 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.903711 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.903747 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.903780 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:25 crc kubenswrapper[4740]: I0130 15:57:25.903804 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:25Z","lastTransitionTime":"2026-01-30T15:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.007217 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.007391 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.007427 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.007466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.007488 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:26Z","lastTransitionTime":"2026-01-30T15:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.110512 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/3.log" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.111109 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.111164 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.111183 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.111209 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.111230 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:26Z","lastTransitionTime":"2026-01-30T15:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
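Separately from the webhook failure, the node keeps reporting NotReady because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet; the entries that follow show the ovnkube-controller container, which would write that configuration, stuck in CrashLoopBackOff. A small sketch that lists the directory named in the kubelet message and reports whether any network config is present (path taken from the log above; this checker is a hypothetical diagnostic, not part of the kubelet):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Directory named in the "no CNI configuration file" message; the
    	// container runtime reports NetworkReady once a config appears here.
    	dir := "/etc/kubernetes/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Printf("cannot read %s: %v\n", dir, err)
    		return
    	}
    	found := false
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			fmt.Printf("CNI config present: %s\n", filepath.Join(dir, e.Name()))
    			found = true
    		}
    	}
    	if !found {
    		fmt.Println("no CNI configuration file found; node will stay NetworkReady=false")
    	}
    }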
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.116581 4740 scope.go:117] "RemoveContainer" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f"
Jan 30 15:57:26 crc kubenswrapper[4740]: E0130 15:57:26.116829 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.133824 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.150896 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.169566 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:
37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.196171 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-li
b\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:24Z\\\",\\\"message\\\":\\\"5-08-24T17:21:41Z]\\\\nI0130 15:57:24.670463 6817 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-7c7j6\\\\nI0130 15:57:24.670455 6817 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 15:57:24.670479 6817 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0130 15:57:24.668922 6817 services_controller.go:434] Service openshift-multus/multus-admission-controller retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{multus-admission-controller openshift-multus c9eea3b1-f918-4c62-9731-c809988317c1 4579 0 2025-02-23 05:21:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:multus-admission-controller] map[service.alpha.openshift.io/serving-cert-secret-name:multus-admission-controller-secret service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc007c8a197 0xc007c8a198}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},ServicePort{Name\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:57:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6
hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.210636 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.214216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.214275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.214289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.214309 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.214324 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:26Z","lastTransitionTime":"2026-01-30T15:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.226806 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:20:31.660246101 +0000 UTC Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.235301 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.260605 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.278331 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.296149 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.317727 4740 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.317787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.317806 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.317833 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.317854 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:26Z","lastTransitionTime":"2026-01-30T15:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.322561 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.334698 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:26 crc kubenswrapper[4740]: E0130 15:57:26.335103 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.341882 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.356085 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.369813 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:17Z\\\",\\\"message\\\":\\\"2026-01-30T15:56:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb\\\\n2026-01-30T15:56:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb to /host/opt/cni/bin/\\\\n2026-01-30T15:56:32Z [verbose] multus-daemon started\\\\n2026-01-30T15:56:32Z [verbose] Readiness Indicator file check\\\\n2026-01-30T15:57:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.382600 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.396776 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.408216 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8121a4c-4723-4864-ad5b-e0a2e78ffa3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b540992041764ee2a7fabcdda74b222e97ce330e8488b265a5486b559f09aba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.420639 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.420738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.421088 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.421104 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.421121 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.421133 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:26Z","lastTransitionTime":"2026-01-30T15:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.440193 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:26Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.525096 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.525139 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.525154 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.525176 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.525188 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:26Z","lastTransitionTime":"2026-01-30T15:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.627717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.627764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.627775 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.627821 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.627841 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:26Z","lastTransitionTime":"2026-01-30T15:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.731787 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.731887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.731913 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.732335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.732594 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:26Z","lastTransitionTime":"2026-01-30T15:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.836789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.836862 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.836880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.836906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.836925 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:26Z","lastTransitionTime":"2026-01-30T15:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.940188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.940253 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.940270 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.940297 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:26 crc kubenswrapper[4740]: I0130 15:57:26.940316 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:26Z","lastTransitionTime":"2026-01-30T15:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.043625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.043692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.043714 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.043741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.043758 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:27Z","lastTransitionTime":"2026-01-30T15:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.147094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.147155 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.147166 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.147184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.147197 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:27Z","lastTransitionTime":"2026-01-30T15:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.227864 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:51:38.700763904 +0000 UTC Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.241638 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.241808 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.241882 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.241843425 +0000 UTC m=+159.878906064 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.241984 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.241993 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.242120 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.242216 4740 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.242252 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.242281 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.242266796 +0000 UTC m=+159.879329435 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.242291 4740 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.242336 4740 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.242252 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.242441 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.242410139 +0000 UTC m=+159.879472758 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.242471 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.24245983 +0000 UTC m=+159.879522439 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.242082 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.242760 4740 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.242809 4740 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.243295 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.242927532 +0000 UTC m=+159.879990241 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.250694 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.250726 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.250740 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.250759 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.250774 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:27Z","lastTransitionTime":"2026-01-30T15:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.335210 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.335259 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.335258 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.335433 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.335608 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:27 crc kubenswrapper[4740]: E0130 15:57:27.335745 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.353507 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.353581 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.353596 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.353619 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.353637 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:27Z","lastTransitionTime":"2026-01-30T15:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.456263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.456313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.456326 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.456346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.456379 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:27Z","lastTransitionTime":"2026-01-30T15:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.559711 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.559762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.559774 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.559794 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.559809 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:27Z","lastTransitionTime":"2026-01-30T15:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.663112 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.663172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.663194 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.663220 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.663240 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:27Z","lastTransitionTime":"2026-01-30T15:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.766657 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.766699 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.766710 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.766727 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.766738 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:27Z","lastTransitionTime":"2026-01-30T15:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.869858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.869902 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.869915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.869934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.869946 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:27Z","lastTransitionTime":"2026-01-30T15:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.973910 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.973988 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.974012 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.974061 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:27 crc kubenswrapper[4740]: I0130 15:57:27.974088 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:27Z","lastTransitionTime":"2026-01-30T15:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.077008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.077059 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.077069 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.077087 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.077101 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:28Z","lastTransitionTime":"2026-01-30T15:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.179832 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.179922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.179948 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.179985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.180011 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:28Z","lastTransitionTime":"2026-01-30T15:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.228471 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:26:16.986565147 +0000 UTC Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.283205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.283249 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.283259 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.283274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.283285 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:28Z","lastTransitionTime":"2026-01-30T15:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.335137 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:28 crc kubenswrapper[4740]: E0130 15:57:28.335311 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.386341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.386406 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.386415 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.386432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.386443 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:28Z","lastTransitionTime":"2026-01-30T15:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.490319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.490435 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.490460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.490494 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.490518 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:28Z","lastTransitionTime":"2026-01-30T15:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.593801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.593903 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.593926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.593952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.593968 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:28Z","lastTransitionTime":"2026-01-30T15:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.697399 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.697460 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.697475 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.697495 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.697509 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:28Z","lastTransitionTime":"2026-01-30T15:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.799588 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.799645 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.799659 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.799680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.799694 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:28Z","lastTransitionTime":"2026-01-30T15:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.902741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.902788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.902798 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.902815 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:28 crc kubenswrapper[4740]: I0130 15:57:28.902826 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:28Z","lastTransitionTime":"2026-01-30T15:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.006219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.006264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.006277 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.006296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.006311 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:29Z","lastTransitionTime":"2026-01-30T15:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.109907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.109956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.109969 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.109987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.109999 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:29Z","lastTransitionTime":"2026-01-30T15:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.212956 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.213028 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.213049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.213083 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.213110 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:29Z","lastTransitionTime":"2026-01-30T15:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.229377 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:19:34.195714392 +0000 UTC Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.316779 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.316893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.316909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.316941 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.316958 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:29Z","lastTransitionTime":"2026-01-30T15:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.335051 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:29 crc kubenswrapper[4740]: E0130 15:57:29.335245 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.335549 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:29 crc kubenswrapper[4740]: E0130 15:57:29.335639 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.335846 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:29 crc kubenswrapper[4740]: E0130 15:57:29.335939 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.420784 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.420835 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.420850 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.420872 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.420887 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:29Z","lastTransitionTime":"2026-01-30T15:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.524466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.524901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.525039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.525190 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.525329 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:29Z","lastTransitionTime":"2026-01-30T15:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.629095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.629174 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.629192 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.629230 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.629251 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:29Z","lastTransitionTime":"2026-01-30T15:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.732872 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.732942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.732962 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.732990 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.733008 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:29Z","lastTransitionTime":"2026-01-30T15:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.836854 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.837292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.837528 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.837747 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.837891 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:29Z","lastTransitionTime":"2026-01-30T15:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.941887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.941960 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.941978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.942004 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:29 crc kubenswrapper[4740]: I0130 15:57:29.942023 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:29Z","lastTransitionTime":"2026-01-30T15:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.046024 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.046089 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.046107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.046135 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.046152 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:30Z","lastTransitionTime":"2026-01-30T15:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.149533 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.149619 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.149638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.149669 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.149691 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:30Z","lastTransitionTime":"2026-01-30T15:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.230521 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:15:18.41753603 +0000 UTC Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.253710 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.253776 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.253799 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.253830 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.253853 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:30Z","lastTransitionTime":"2026-01-30T15:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.335289 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:30 crc kubenswrapper[4740]: E0130 15:57:30.335546 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.356490 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.356533 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.356543 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.356562 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.356573 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:30Z","lastTransitionTime":"2026-01-30T15:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.459063 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.459106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.459116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.459134 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.459145 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:30Z","lastTransitionTime":"2026-01-30T15:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.561693 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.561781 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.561810 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.561844 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.561868 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:30Z","lastTransitionTime":"2026-01-30T15:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.665305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.665771 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.665887 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.665995 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.666091 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:30Z","lastTransitionTime":"2026-01-30T15:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.768889 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.768957 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.768977 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.769008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.769032 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:30Z","lastTransitionTime":"2026-01-30T15:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.873319 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.873859 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.874071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.874248 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.874453 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:30Z","lastTransitionTime":"2026-01-30T15:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.978091 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.978150 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.978165 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.978187 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:30 crc kubenswrapper[4740]: I0130 15:57:30.978203 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:30Z","lastTransitionTime":"2026-01-30T15:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.081406 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.081464 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.081477 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.081496 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.081508 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:31Z","lastTransitionTime":"2026-01-30T15:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.185524 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.185590 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.185602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.185620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.185629 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:31Z","lastTransitionTime":"2026-01-30T15:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.231190 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:15:01.665416158 +0000 UTC Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.289666 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.289742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.289770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.289801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.289821 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:31Z","lastTransitionTime":"2026-01-30T15:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.335493 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.335524 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:31 crc kubenswrapper[4740]: E0130 15:57:31.335741 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:31 crc kubenswrapper[4740]: E0130 15:57:31.335876 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.336479 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:31 crc kubenswrapper[4740]: E0130 15:57:31.336926 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.392946 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.393026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.393046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.393074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.393094 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:31Z","lastTransitionTime":"2026-01-30T15:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.495408 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.495455 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.495465 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.495479 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.495487 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:31Z","lastTransitionTime":"2026-01-30T15:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.598706 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.598804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.598825 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.598853 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.598872 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:31Z","lastTransitionTime":"2026-01-30T15:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.702866 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.702942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.702960 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.702987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.703006 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:31Z","lastTransitionTime":"2026-01-30T15:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.806723 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.806796 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.806814 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.806847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.806889 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:31Z","lastTransitionTime":"2026-01-30T15:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.910311 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.910400 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.910419 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.910445 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:31 crc kubenswrapper[4740]: I0130 15:57:31.910463 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:31Z","lastTransitionTime":"2026-01-30T15:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.014634 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.014712 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.014731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.014762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.014783 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:32Z","lastTransitionTime":"2026-01-30T15:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.118829 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.118926 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.118939 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.118967 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.118982 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:32Z","lastTransitionTime":"2026-01-30T15:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.222271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.222339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.222388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.222409 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.222420 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:32Z","lastTransitionTime":"2026-01-30T15:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.232148 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:51:04.058530048 +0000 UTC Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.325200 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.325304 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.325324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.325386 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.325427 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:32Z","lastTransitionTime":"2026-01-30T15:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.334806 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:32 crc kubenswrapper[4740]: E0130 15:57:32.334968 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.429217 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.429287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.429305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.429337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.429386 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:32Z","lastTransitionTime":"2026-01-30T15:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.532827 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.532915 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.532935 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.532965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.532988 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:32Z","lastTransitionTime":"2026-01-30T15:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.635627 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.635665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.635677 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.635692 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.635705 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:32Z","lastTransitionTime":"2026-01-30T15:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.738421 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.738470 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.738483 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.738501 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.738516 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:32Z","lastTransitionTime":"2026-01-30T15:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.841755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.841801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.841816 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.841839 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.841854 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:32Z","lastTransitionTime":"2026-01-30T15:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.945740 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.945817 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.945840 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.945868 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:32 crc kubenswrapper[4740]: I0130 15:57:32.945888 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:32Z","lastTransitionTime":"2026-01-30T15:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.050075 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.050133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.050147 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.050170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.050185 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:33Z","lastTransitionTime":"2026-01-30T15:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.153916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.153993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.154010 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.154038 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.154059 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:33Z","lastTransitionTime":"2026-01-30T15:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.233117 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:09:03.242646074 +0000 UTC Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.257454 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.257523 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.257542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.257573 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.257594 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:33Z","lastTransitionTime":"2026-01-30T15:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.335008 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.335510 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.335391 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:33 crc kubenswrapper[4740]: E0130 15:57:33.336333 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:33 crc kubenswrapper[4740]: E0130 15:57:33.336444 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:33 crc kubenswrapper[4740]: E0130 15:57:33.336746 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.357480 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.360337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.360398 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.360414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.360438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.360454 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:33Z","lastTransitionTime":"2026-01-30T15:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.379716 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.397437 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.420697 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.450299 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:24Z\\\",\\\"message\\\":\\\"5-08-24T17:21:41Z]\\\\nI0130 15:57:24.670463 6817 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-7c7j6\\\\nI0130 15:57:24.670455 6817 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 15:57:24.670479 6817 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0130 15:57:24.668922 6817 services_controller.go:434] Service openshift-multus/multus-admission-controller retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{multus-admission-controller openshift-multus c9eea3b1-f918-4c62-9731-c809988317c1 4579 0 2025-02-23 05:21:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:multus-admission-controller] map[service.alpha.openshift.io/serving-cert-secret-name:multus-admission-controller-secret service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc007c8a197 0xc007c8a198}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},ServicePort{Name\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:57:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.463580 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.463652 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.463675 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.463705 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.463727 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:33Z","lastTransitionTime":"2026-01-30T15:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.466831 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.490803 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.512732 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.532655 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:17Z\\\",\\\"message\\\":\\\"2026-01-30T15:56:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb\\\\n2026-01-30T15:56:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb to /host/opt/cni/bin/\\\\n2026-01-30T15:56:32Z [verbose] multus-daemon started\\\\n2026-01-30T15:56:32Z [verbose] Readiness Indicator file check\\\\n2026-01-30T15:57:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.551281 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.566893 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.566982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.567000 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.567050 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.567067 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:33Z","lastTransitionTime":"2026-01-30T15:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.575273 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8121a4c-4723-4864-ad5b-e0a2e78ffa3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b540992041764ee2a7fabcdda74b222e97ce330e8488b265a5486b559f09aba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.597418 4740 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.621425 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.639883 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.658716 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.670636 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.670695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.670711 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.670731 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.670745 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:33Z","lastTransitionTime":"2026-01-30T15:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.676297 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.694291 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 
15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.716996 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:33Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.774199 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.774276 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.774292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.774313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.774328 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:33Z","lastTransitionTime":"2026-01-30T15:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.878402 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.878449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.878465 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.878486 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.878498 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:33Z","lastTransitionTime":"2026-01-30T15:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.982239 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.982289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.982301 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.982321 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:33 crc kubenswrapper[4740]: I0130 15:57:33.982335 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:33Z","lastTransitionTime":"2026-01-30T15:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.085205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.085277 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.085293 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.085313 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.085327 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:34Z","lastTransitionTime":"2026-01-30T15:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.187748 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.187793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.187802 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.187822 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.187833 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:34Z","lastTransitionTime":"2026-01-30T15:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.234140 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:36:47.442471047 +0000 UTC
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.290929 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.291004 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.291018 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.291037 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.291050 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:34Z","lastTransitionTime":"2026-01-30T15:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.335312 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:57:34 crc kubenswrapper[4740]: E0130 15:57:34.335493 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.394629 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.394697 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.394716 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.394738 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.394759 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:34Z","lastTransitionTime":"2026-01-30T15:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.497549 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.497632 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.497651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.497680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.497701 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:34Z","lastTransitionTime":"2026-01-30T15:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.608117 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.608172 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.608185 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.608216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.608231 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:34Z","lastTransitionTime":"2026-01-30T15:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.711814 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.711891 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.711906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.711933 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.711946 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:34Z","lastTransitionTime":"2026-01-30T15:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.815239 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.815311 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.815333 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.815388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.815409 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:34Z","lastTransitionTime":"2026-01-30T15:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.919884 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.920029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.920071 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.920099 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:34 crc kubenswrapper[4740]: I0130 15:57:34.920117 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:34Z","lastTransitionTime":"2026-01-30T15:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.023573 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.023638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.023651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.023680 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.023700 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.126863 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.126929 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.126940 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.126962 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.126976 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.230478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.230516 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.230528 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.230545 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.230556 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.235513 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:39:53.81600717 +0000 UTC
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.333635 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.333707 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.333718 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.333744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.333754 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.334567 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.334627 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.334655 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:57:35 crc kubenswrapper[4740]: E0130 15:57:35.334701 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 15:57:35 crc kubenswrapper[4740]: E0130 15:57:35.334846 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347"
Jan 30 15:57:35 crc kubenswrapper[4740]: E0130 15:57:35.335094 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.437114 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.437160 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.437171 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.437187 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.437200 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.540920 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.541005 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.541034 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.541065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.541088 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.644598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.644682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.644702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.644732 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.644755 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.747978 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.748058 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.748076 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.748103 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.748123 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.850922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.850971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.850983 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.851001 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.851013 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.932651 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.932714 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.932732 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.932756 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.932774 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:35 crc kubenswrapper[4740]: E0130 15:57:35.959453 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:35Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.964961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.965008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.965027 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.965051 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.965069 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.992294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.992400 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.992426 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.992456 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:35 crc kubenswrapper[4740]: I0130 15:57:35.992478 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:35Z","lastTransitionTime":"2026-01-30T15:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:36 crc kubenswrapper[4740]: E0130 15:57:36.014738 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:36Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.020201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.020274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.020329 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.020383 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.020407 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:36Z","lastTransitionTime":"2026-01-30T15:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:36 crc kubenswrapper[4740]: E0130 15:57:36.041833 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:36Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.047559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.047636 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.047657 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.047682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.047703 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:36Z","lastTransitionTime":"2026-01-30T15:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:36 crc kubenswrapper[4740]: E0130 15:57:36.067270 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:36Z is after 2025-08-24T17:21:41Z"
Jan 30 15:57:36 crc kubenswrapper[4740]: E0130 15:57:36.067408 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.068955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.068977 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.068985 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.068995 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.069004 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:36Z","lastTransitionTime":"2026-01-30T15:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.172877 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.173617 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.173766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.173876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.173946 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:36Z","lastTransitionTime":"2026-01-30T15:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.236840 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:28:58.145002784 +0000 UTC
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.277605 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.277986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.278103 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.278227 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.278317 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:36Z","lastTransitionTime":"2026-01-30T15:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.335458 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:57:36 crc kubenswrapper[4740]: E0130 15:57:36.336043 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.380702 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.380750 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.380766 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.380789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.380806 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:36Z","lastTransitionTime":"2026-01-30T15:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.484898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.484964 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.484983 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.485004 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.485021 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:36Z","lastTransitionTime":"2026-01-30T15:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.588305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.588394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.588413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.588438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.588456 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:36Z","lastTransitionTime":"2026-01-30T15:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.691668 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.691733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.691751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.691775 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.691794 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:36Z","lastTransitionTime":"2026-01-30T15:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.795432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.795520 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.795534 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.795556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.795574 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:36Z","lastTransitionTime":"2026-01-30T15:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.899032 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.899122 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.899138 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.899174 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:36 crc kubenswrapper[4740]: I0130 15:57:36.899190 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:36Z","lastTransitionTime":"2026-01-30T15:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.002601 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.002672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.002691 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.002717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.002737 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:37Z","lastTransitionTime":"2026-01-30T15:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.106106 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.106161 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.106184 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.106214 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.106242 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:37Z","lastTransitionTime":"2026-01-30T15:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.209042 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.209116 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.209140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.209170 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.209193 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:37Z","lastTransitionTime":"2026-01-30T15:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.237741 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:21:13.186076146 +0000 UTC
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.312160 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.312219 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.312244 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.312277 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.312300 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:37Z","lastTransitionTime":"2026-01-30T15:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.334919 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.334974 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:57:37 crc kubenswrapper[4740]: E0130 15:57:37.335109 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.335151 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:57:37 crc kubenswrapper[4740]: E0130 15:57:37.335331 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347"
Jan 30 15:57:37 crc kubenswrapper[4740]: E0130 15:57:37.335405 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.415970 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.416049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.416067 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.416094 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.416116 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:37Z","lastTransitionTime":"2026-01-30T15:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.519958 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.520046 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.520113 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.520146 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.520170 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:37Z","lastTransitionTime":"2026-01-30T15:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.623489 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.623550 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.623570 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.623595 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.623615 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:37Z","lastTransitionTime":"2026-01-30T15:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.726952 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.727322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.727446 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.727547 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.727642 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:37Z","lastTransitionTime":"2026-01-30T15:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.831390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.831473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.831495 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.831527 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.831549 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:37Z","lastTransitionTime":"2026-01-30T15:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.935586 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.935986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.936183 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.936415 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:37 crc kubenswrapper[4740]: I0130 15:57:37.936583 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:37Z","lastTransitionTime":"2026-01-30T15:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.040582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.040644 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.040658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.040719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.040736 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:38Z","lastTransitionTime":"2026-01-30T15:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.143786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.143866 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.143883 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.143912 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.143932 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:38Z","lastTransitionTime":"2026-01-30T15:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.237943 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:14:55.054504088 +0000 UTC
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.247341 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.247432 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.247450 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.247477 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.247498 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:38Z","lastTransitionTime":"2026-01-30T15:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.334755 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:57:38 crc kubenswrapper[4740]: E0130 15:57:38.335163 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.352548 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.352631 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.352657 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.352687 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.352712 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:38Z","lastTransitionTime":"2026-01-30T15:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.455628 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.455676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.455685 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.455703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.455713 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:38Z","lastTransitionTime":"2026-01-30T15:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.558697 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.558756 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.558769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.558790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.558809 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:38Z","lastTransitionTime":"2026-01-30T15:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.661813 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.661874 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.661885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.661907 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.661923 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:38Z","lastTransitionTime":"2026-01-30T15:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.765200 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.765261 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.765278 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.765303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.765320 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:38Z","lastTransitionTime":"2026-01-30T15:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.868598 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.868658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.868679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.868709 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.868732 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:38Z","lastTransitionTime":"2026-01-30T15:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.971762 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.971820 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.971838 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.971865 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:38 crc kubenswrapper[4740]: I0130 15:57:38.971885 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:38Z","lastTransitionTime":"2026-01-30T15:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.075016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.075098 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.075119 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.075148 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.075167 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:39Z","lastTransitionTime":"2026-01-30T15:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.178496 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.178621 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.178682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.178715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.178787 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:39Z","lastTransitionTime":"2026-01-30T15:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.239084 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:45:47.698041196 +0000 UTC
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.282060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.282185 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.282205 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.282239 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.282261 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:39Z","lastTransitionTime":"2026-01-30T15:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.335031 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.335182 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 15:57:39 crc kubenswrapper[4740]: E0130 15:57:39.335257 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.335325 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv"
Jan 30 15:57:39 crc kubenswrapper[4740]: E0130 15:57:39.335534 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 15:57:39 crc kubenswrapper[4740]: E0130 15:57:39.335670 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.385719 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.385801 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.385825 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.385858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.385884 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:39Z","lastTransitionTime":"2026-01-30T15:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.489302 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.489388 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.489411 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.489438 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.489461 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:39Z","lastTransitionTime":"2026-01-30T15:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.592824 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.592909 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.592947 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.592981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.593003 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:39Z","lastTransitionTime":"2026-01-30T15:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.696496 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.696551 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.696570 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.696602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.696619 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:39Z","lastTransitionTime":"2026-01-30T15:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.799820 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.800188 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.800294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.800423 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.800525 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:39Z","lastTransitionTime":"2026-01-30T15:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.902755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.902824 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.902833 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.902851 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:39 crc kubenswrapper[4740]: I0130 15:57:39.902862 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:39Z","lastTransitionTime":"2026-01-30T15:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.006417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.006457 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.006468 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.006484 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.006495 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:40Z","lastTransitionTime":"2026-01-30T15:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.109652 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.109710 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.109720 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.109744 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.109755 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:40Z","lastTransitionTime":"2026-01-30T15:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.212942 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.213311 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.213428 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.213457 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.213470 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:40Z","lastTransitionTime":"2026-01-30T15:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.240181 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:04:18.092901338 +0000 UTC Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.319166 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.319264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.319282 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.319321 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.319340 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:40Z","lastTransitionTime":"2026-01-30T15:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.334712 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:40 crc kubenswrapper[4740]: E0130 15:57:40.334970 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.422390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.422777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.422879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.422945 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.423013 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:40Z","lastTransitionTime":"2026-01-30T15:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.525916 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.526271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.526332 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.526448 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.526514 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:40Z","lastTransitionTime":"2026-01-30T15:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.629625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.629667 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.629678 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.629696 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.629709 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:40Z","lastTransitionTime":"2026-01-30T15:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.732193 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.732247 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.732264 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.732290 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.732311 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:40Z","lastTransitionTime":"2026-01-30T15:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.834645 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.835089 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.835179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.835268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.835394 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:40Z","lastTransitionTime":"2026-01-30T15:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.938180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.938236 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.938246 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.938266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:40 crc kubenswrapper[4740]: I0130 15:57:40.938281 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:40Z","lastTransitionTime":"2026-01-30T15:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.040394 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.040755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.040860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.040961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.041049 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:41Z","lastTransitionTime":"2026-01-30T15:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.143910 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.143949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.143958 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.143976 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.143986 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:41Z","lastTransitionTime":"2026-01-30T15:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.241088 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:43:59.682656277 +0000 UTC Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.246321 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.246367 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.246379 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.246396 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.246406 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:41Z","lastTransitionTime":"2026-01-30T15:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.335413 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.335484 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.335626 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:41 crc kubenswrapper[4740]: E0130 15:57:41.335739 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:41 crc kubenswrapper[4740]: E0130 15:57:41.335844 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:41 crc kubenswrapper[4740]: E0130 15:57:41.335936 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.336773 4740 scope.go:117] "RemoveContainer" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f" Jan 30 15:57:41 crc kubenswrapper[4740]: E0130 15:57:41.336935 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.348526 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.348577 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.348588 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.348606 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.348622 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:41Z","lastTransitionTime":"2026-01-30T15:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.451782 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.451826 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.451839 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.451861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.451879 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:41Z","lastTransitionTime":"2026-01-30T15:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.554906 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.554955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.554969 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.554993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.555007 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:41Z","lastTransitionTime":"2026-01-30T15:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.657517 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.657569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.657580 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.657602 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.657627 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:41Z","lastTransitionTime":"2026-01-30T15:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.760449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.760519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.760542 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.760571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.760590 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:41Z","lastTransitionTime":"2026-01-30T15:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.864064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.864192 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.864217 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.864245 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.864264 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:41Z","lastTransitionTime":"2026-01-30T15:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.910561 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:41 crc kubenswrapper[4740]: E0130 15:57:41.910864 4740 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 15:57:41 crc kubenswrapper[4740]: E0130 15:57:41.911006 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs podName:7f93a9ce-6677-48e3-9476-c37aa40b6347 nodeName:}" failed. No retries permitted until 2026-01-30 15:58:45.910972028 +0000 UTC m=+174.548034677 (durationBeforeRetry 1m4s). 
Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.967548 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.967603 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.967617 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.967638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:41 crc kubenswrapper[4740]: I0130 15:57:41.967651 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:41Z","lastTransitionTime":"2026-01-30T15:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.070717 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.070780 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.070793 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.070816 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.070830 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:42Z","lastTransitionTime":"2026-01-30T15:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.173912 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.173971 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.173982 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.174004 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.174022 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:42Z","lastTransitionTime":"2026-01-30T15:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.241702 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:12:07.352451766 +0000 UTC
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.277171 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.277225 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.277238 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.277287 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.277302 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:42Z","lastTransitionTime":"2026-01-30T15:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.335012 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:57:42 crc kubenswrapper[4740]: E0130 15:57:42.335458 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.364320 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.380591 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.380658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.380676 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.380704 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.380722 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:42Z","lastTransitionTime":"2026-01-30T15:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.483475 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.483532 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.483552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.483579 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.483604 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:42Z","lastTransitionTime":"2026-01-30T15:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.586335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.586400 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.586414 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.586433 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.586448 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:42Z","lastTransitionTime":"2026-01-30T15:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.689823 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.689886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.689899 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.689922 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.689937 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:42Z","lastTransitionTime":"2026-01-30T15:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.792611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.792665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.792690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.792724 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.792741 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:42Z","lastTransitionTime":"2026-01-30T15:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.901020 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.901294 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.901317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.901373 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:42 crc kubenswrapper[4740]: I0130 15:57:42.901404 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:42Z","lastTransitionTime":"2026-01-30T15:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.004987 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.005073 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.005097 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.005169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.005193 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:43Z","lastTransitionTime":"2026-01-30T15:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.108898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.108974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.108992 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.109022 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.109044 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:43Z","lastTransitionTime":"2026-01-30T15:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.212434 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.212514 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.212527 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.212549 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.212563 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:43Z","lastTransitionTime":"2026-01-30T15:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.242194 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:11:21.338461531 +0000 UTC Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.316274 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.316335 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.316390 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.316425 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.316444 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:43Z","lastTransitionTime":"2026-01-30T15:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.335180 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.335300 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:43 crc kubenswrapper[4740]: E0130 15:57:43.335382 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.335478 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:43 crc kubenswrapper[4740]: E0130 15:57:43.335582 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:43 crc kubenswrapper[4740]: E0130 15:57:43.335855 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.353643 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.369067 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.382520 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:
37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.415025 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-li
b\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:24Z\\\",\\\"message\\\":\\\"5-08-24T17:21:41Z]\\\\nI0130 15:57:24.670463 6817 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-7c7j6\\\\nI0130 15:57:24.670455 6817 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 15:57:24.670479 6817 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0130 15:57:24.668922 6817 services_controller.go:434] Service openshift-multus/multus-admission-controller retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{multus-admission-controller openshift-multus c9eea3b1-f918-4c62-9731-c809988317c1 4579 0 2025-02-23 05:21:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:multus-admission-controller] map[service.alpha.openshift.io/serving-cert-secret-name:multus-admission-controller-secret service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc007c8a197 0xc007c8a198}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},ServicePort{Name\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:57:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6
hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.423690 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.423763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.423785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.423813 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.423828 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:43Z","lastTransitionTime":"2026-01-30T15:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.466773 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.484530 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.501448 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.516654 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.526415 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.526641 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.526730 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.526823 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.526922 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:43Z","lastTransitionTime":"2026-01-30T15:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.535217 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.557292 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 
2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.575684 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.595035 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.620769 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:17Z\\\",\\\"message\\\":\\\"2026-01-30T15:56:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb\\\\n2026-01-30T15:56:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb to /host/opt/cni/bin/\\\\n2026-01-30T15:56:32Z [verbose] multus-daemon started\\\\n2026-01-30T15:56:32Z [verbose] Readiness Indicator file check\\\\n2026-01-30T15:57:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.629890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.629937 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.629955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.629977 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.629995 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:43Z","lastTransitionTime":"2026-01-30T15:57:43Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.637578 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.658275 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.676130 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8121a4c-4723-4864-ad5b-e0a2e78ffa3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b540992041764ee2a7fabcdda74b222e97ce330e8488b265a5486b559f09aba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.701804 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7af2c418-73ad-4471-9c4a-1648d09eaa14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a187615099d8eb2316e90d5d3cf8f9193fdc55f2362c08440c00bdcac439cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7831eaa07c89ee936c1eb0d2578e583401bfdf20a61449990a7975b3e2972a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84e20da08abae11660e7c75659fa97583bce84e3e01492f36db2adb9e4d90514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://779f48007b31f1c306d0ce8d2a473a667ebc1bb20af110df279c975b3417d328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf22ff785037823793cdf211ea74d0bb088e209e79b7bdcfa8868be2756fec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14bad37d616cd79f0397cfbe88861d17767c7edca69866641ed052cd17a59c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14bad37d616cd79f0397cfbe88861d17767c7edca69866641ed052cd17a59c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7702d55f3234ac65cc2dbacac180c56f746190afcbc9723eeb7de0d45617a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7702d55f3234ac65cc2dbacac180c56f746190afcbc9723eeb7de0d45617a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c87a7089f775e75e9ebdbc6f43f0533927f91cb329644573bbd5e4088af185e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c87a7089f775e75e9ebdbc6f43f0533927f91cb329644573bbd5e4088af185e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.724292 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.733168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.733601 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.733788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.733901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.734012 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:43Z","lastTransitionTime":"2026-01-30T15:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.743078 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:43Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.837665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 
15:57:43.837753 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.837783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.837818 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.837846 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:43Z","lastTransitionTime":"2026-01-30T15:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.942858 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.943049 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.943070 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.943100 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:43 crc kubenswrapper[4740]: I0130 15:57:43.943120 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:43Z","lastTransitionTime":"2026-01-30T15:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.046700 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.046772 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.046842 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.046923 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.046947 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:44Z","lastTransitionTime":"2026-01-30T15:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.150397 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.150743 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.150830 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.150984 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.151081 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:44Z","lastTransitionTime":"2026-01-30T15:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.242873 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:35:00.073903992 +0000 UTC Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.254478 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.254546 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.254571 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.254605 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.254630 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:44Z","lastTransitionTime":"2026-01-30T15:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.335215 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:44 crc kubenswrapper[4740]: E0130 15:57:44.335710 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.358044 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.358093 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.358108 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.358130 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.358150 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:44Z","lastTransitionTime":"2026-01-30T15:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.460747 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.460806 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.460820 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.460848 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.460868 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:44Z","lastTransitionTime":"2026-01-30T15:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.564483 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.564539 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.564553 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.564613 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.564625 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:44Z","lastTransitionTime":"2026-01-30T15:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.667573 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.667732 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.667752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.667776 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.667796 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:44Z","lastTransitionTime":"2026-01-30T15:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.770961 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.771034 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.771055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.771086 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.771109 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:44Z","lastTransitionTime":"2026-01-30T15:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.874016 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.874081 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.874099 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.874124 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.874144 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:44Z","lastTransitionTime":"2026-01-30T15:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.977658 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.978107 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.978568 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.978832 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:44 crc kubenswrapper[4740]: I0130 15:57:44.979077 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:44Z","lastTransitionTime":"2026-01-30T15:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.083283 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.083741 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.083898 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.084074 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.084202 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:45Z","lastTransitionTime":"2026-01-30T15:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.187695 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.187765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.187783 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.187813 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.187832 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:45Z","lastTransitionTime":"2026-01-30T15:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.243443 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:08:31.224620196 +0000 UTC Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.290770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.291068 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.291291 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.291563 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.291761 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:45Z","lastTransitionTime":"2026-01-30T15:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.335298 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.335476 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.335743 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:45 crc kubenswrapper[4740]: E0130 15:57:45.335744 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:45 crc kubenswrapper[4740]: E0130 15:57:45.335887 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:45 crc kubenswrapper[4740]: E0130 15:57:45.336060 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.394965 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.395032 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.395050 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.395080 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.395101 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:45Z","lastTransitionTime":"2026-01-30T15:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.499221 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.499298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.499317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.499345 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.499389 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:45Z","lastTransitionTime":"2026-01-30T15:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.603609 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.603679 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.603696 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.603725 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.603743 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:45Z","lastTransitionTime":"2026-01-30T15:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.707213 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.707278 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.707296 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.707324 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.707343 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:45Z","lastTransitionTime":"2026-01-30T15:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.810865 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.810936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.810955 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.810984 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.811009 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:45Z","lastTransitionTime":"2026-01-30T15:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.914077 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.914161 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.914180 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.914209 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:45 crc kubenswrapper[4740]: I0130 15:57:45.914229 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:45Z","lastTransitionTime":"2026-01-30T15:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.017230 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.017311 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.017330 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.017405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.017427 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.121147 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.121255 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.121285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.121314 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.121334 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.224663 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.224713 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.224724 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.224742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.224753 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.244283 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:16:05.749484082 +0000 UTC Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.268449 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.268524 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.268543 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.268573 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.268591 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: E0130 15:57:46.289932 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.296168 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.296224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.296235 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.296253 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.296264 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: E0130 15:57:46.316119 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.321777 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.321842 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
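Every "Error updating node status, will retry" failure in this log has the same root cause: the apiserver forwards the node-status patch to the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743/node, whose serving certificate expired on 2025-08-24, while the node clock reads 2026-01-30. A hedged diagnostic sketch for confirming this from the node (assuming the endpoint is reachable locally; InsecureSkipVerify is deliberate because the point is to inspect an untrusted, expired certificate, not to authenticate the peer):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Endpoint taken from the webhook error above.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // The first peer certificate is the leaf the webhook serves.
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore)
        fmt.Printf("notAfter:  %s\n", cert.NotAfter)
        fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
    }

Until that certificate is rotated, every PATCH of the node object is rejected at the webhook, so the kubelet keeps retrying the identical status patch, as the records below show.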
event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.321860 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.321903 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.321925 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.335017 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:46 crc kubenswrapper[4740]: E0130 15:57:46.335197 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:46 crc kubenswrapper[4740]: E0130 15:57:46.343422 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.348298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.348384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
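The pod-sync failure above is downstream of the same readiness gate: the runtime reports NetworkReady=false because nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/, so no new pod sandbox can be created. A rough standalone approximation of that check (the real logic lives in the container runtime's CNI handling, e.g. ocicni in CRI-O; this sketch only lists candidate config files):

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory taken from the kubelet message above.
        const dir = "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatal(err)
        }
        found := false
        for _, e := range entries {
            // CNI configs are conventionally .conf, .conflist, or .json files.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file in", dir, "- network plugin not ready")
        }
    }

Once the network provider (here, the OVN-Kubernetes stack that also owns the expired webhook certificate) writes its config into that directory, the NetworkReady gate clears and sandbox creation can proceed.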
event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.348403 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.348475 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.348497 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: E0130 15:57:46.365251 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.369835 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.369883 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.369901 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.369924 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.369939 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: E0130 15:57:46.382914 4740 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f3fdd8ea-a373-4a34-8018-9155cc4dd491\\\",\\\"systemUUID\\\":\\\"cbfd1cc5-d98d-49aa-89cf-5db774a30b6e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:46 crc kubenswrapper[4740]: E0130 15:57:46.383063 4740 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.385333 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.385407 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.385420 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.385444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.385464 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.487876 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.487919 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.487929 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.487946 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.487958 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.591165 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.591210 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.591229 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.591322 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.591345 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.695212 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.695292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.695315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.695346 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.695422 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.798473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.798537 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.798574 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.798612 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.798642 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.901934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.902017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.902042 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.902075 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:46 crc kubenswrapper[4740]: I0130 15:57:46.902096 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:46Z","lastTransitionTime":"2026-01-30T15:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.005384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.005444 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.005464 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.005489 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.005507 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:47Z","lastTransitionTime":"2026-01-30T15:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.108739 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.108821 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.108845 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.108879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.108901 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:47Z","lastTransitionTime":"2026-01-30T15:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.211607 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.211640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.211648 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.211664 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.211675 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:47Z","lastTransitionTime":"2026-01-30T15:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.244734 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:47:38.942985414 +0000 UTC Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.315028 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.315405 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.315418 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.315436 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.315448 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:47Z","lastTransitionTime":"2026-01-30T15:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.334406 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.334507 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.335017 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:47 crc kubenswrapper[4740]: E0130 15:57:47.335230 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:47 crc kubenswrapper[4740]: E0130 15:57:47.335284 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:47 crc kubenswrapper[4740]: E0130 15:57:47.335415 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.417643 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.418029 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.418126 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.418196 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.418255 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:47Z","lastTransitionTime":"2026-01-30T15:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.521830 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.521890 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.521908 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.521934 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.521964 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:47Z","lastTransitionTime":"2026-01-30T15:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.625224 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.625285 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.625305 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.625337 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.625395 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:47Z","lastTransitionTime":"2026-01-30T15:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.728521 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.728580 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.728600 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.728625 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.728644 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:47Z","lastTransitionTime":"2026-01-30T15:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.831897 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.832334 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.832785 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.833218 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.833638 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:47Z","lastTransitionTime":"2026-01-30T15:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.937195 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.937252 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.937268 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.937289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:47 crc kubenswrapper[4740]: I0130 15:57:47.937302 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:47Z","lastTransitionTime":"2026-01-30T15:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.041505 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.041582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.041601 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.041629 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.041647 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:48Z","lastTransitionTime":"2026-01-30T15:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.145201 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.145254 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.145266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.145284 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.145299 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:48Z","lastTransitionTime":"2026-01-30T15:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.245236 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:52:15.61608039 +0000 UTC Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.248790 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.248847 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.248862 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.248885 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.248899 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:48Z","lastTransitionTime":"2026-01-30T15:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.334710 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:48 crc kubenswrapper[4740]: E0130 15:57:48.335026 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.352165 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.352237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.352266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.352298 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.352317 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:48Z","lastTransitionTime":"2026-01-30T15:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.456176 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.456482 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.456503 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.456532 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.456552 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:48Z","lastTransitionTime":"2026-01-30T15:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.560179 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.560251 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.560263 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.560289 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.560304 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:48Z","lastTransitionTime":"2026-01-30T15:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.664649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.664737 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.664761 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.664797 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.664818 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:48Z","lastTransitionTime":"2026-01-30T15:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.768616 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.768708 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.768735 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.768770 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.768790 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:48Z","lastTransitionTime":"2026-01-30T15:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.873228 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.873281 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.873295 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.873315 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.873329 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:48Z","lastTransitionTime":"2026-01-30T15:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.976949 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.977037 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.977065 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.977102 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:48 crc kubenswrapper[4740]: I0130 15:57:48.977130 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:48Z","lastTransitionTime":"2026-01-30T15:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.081569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.081665 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.081703 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.081740 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.081759 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:49Z","lastTransitionTime":"2026-01-30T15:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.185804 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.185873 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.185889 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.185917 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.185936 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:49Z","lastTransitionTime":"2026-01-30T15:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.245740 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:01:07.627565992 +0000 UTC Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.288974 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.289028 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.289039 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.289060 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.289072 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:49Z","lastTransitionTime":"2026-01-30T15:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.335210 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.335238 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:49 crc kubenswrapper[4740]: E0130 15:57:49.335434 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.335468 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:49 crc kubenswrapper[4740]: E0130 15:57:49.335635 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:49 crc kubenswrapper[4740]: E0130 15:57:49.335737 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.392672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.392722 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.392733 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.392751 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.392763 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:49Z","lastTransitionTime":"2026-01-30T15:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.496671 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.496746 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.496765 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.496789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.496805 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:49Z","lastTransitionTime":"2026-01-30T15:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.600041 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.600121 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.600140 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.600169 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.600187 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:49Z","lastTransitionTime":"2026-01-30T15:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.704392 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.704466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.704543 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.704582 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.704607 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:49Z","lastTransitionTime":"2026-01-30T15:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.807685 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.807755 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.807769 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.807789 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.807803 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:49Z","lastTransitionTime":"2026-01-30T15:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.910519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.910688 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.910709 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.910736 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:49 crc kubenswrapper[4740]: I0130 15:57:49.910755 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:49Z","lastTransitionTime":"2026-01-30T15:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.013457 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.013519 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.013536 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.013561 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.013578 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:50Z","lastTransitionTime":"2026-01-30T15:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.116556 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.116604 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.116617 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.116638 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.116651 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:50Z","lastTransitionTime":"2026-01-30T15:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.219292 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.219376 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.219393 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.219417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.219434 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:50Z","lastTransitionTime":"2026-01-30T15:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.247578 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:32:41.78411202 +0000 UTC Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.322404 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.322466 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.322480 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.322507 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.322528 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:50Z","lastTransitionTime":"2026-01-30T15:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.334883 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:50 crc kubenswrapper[4740]: E0130 15:57:50.335018 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.425317 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.425386 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.425396 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.425413 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.425424 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:50Z","lastTransitionTime":"2026-01-30T15:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.528886 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.528986 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.529009 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.529047 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.529072 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:50Z","lastTransitionTime":"2026-01-30T15:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.633548 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.633620 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.633639 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.633670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.633691 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:50Z","lastTransitionTime":"2026-01-30T15:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.736745 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.736831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.736852 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.736920 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.736941 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:50Z","lastTransitionTime":"2026-01-30T15:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.840819 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.840879 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.840897 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.840923 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.841015 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:50Z","lastTransitionTime":"2026-01-30T15:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.944932 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.945007 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.945064 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.945095 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:50 crc kubenswrapper[4740]: I0130 15:57:50.945114 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:50Z","lastTransitionTime":"2026-01-30T15:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.048473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.048538 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.048547 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.048567 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.048579 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:51Z","lastTransitionTime":"2026-01-30T15:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.151611 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.151764 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.151788 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.151817 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.151836 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:51Z","lastTransitionTime":"2026-01-30T15:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.248253 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:47:10.602907333 +0000 UTC Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.254672 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.254715 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.254724 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.254742 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.254752 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:51Z","lastTransitionTime":"2026-01-30T15:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.347770 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.348056 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:51 crc kubenswrapper[4740]: E0130 15:57:51.348054 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:51 crc kubenswrapper[4740]: E0130 15:57:51.348593 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.349009 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:51 crc kubenswrapper[4740]: E0130 15:57:51.349445 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.356786 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.356819 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.356830 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.356845 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.356856 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:51Z","lastTransitionTime":"2026-01-30T15:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.460423 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.460473 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.460483 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.460502 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.460514 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:51Z","lastTransitionTime":"2026-01-30T15:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.563928 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.563981 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.563993 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.564017 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.564031 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:51Z","lastTransitionTime":"2026-01-30T15:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.667509 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.667555 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.667569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.667589 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.667601 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:51Z","lastTransitionTime":"2026-01-30T15:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.771133 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.771240 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.771266 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.771303 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.771330 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:51Z","lastTransitionTime":"2026-01-30T15:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.874936 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.875008 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.875026 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.875055 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.875078 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:51Z","lastTransitionTime":"2026-01-30T15:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.978299 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.978374 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.978384 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.978412 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:51 crc kubenswrapper[4740]: I0130 15:57:51.978423 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:51Z","lastTransitionTime":"2026-01-30T15:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.082158 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.082257 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.082275 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.082302 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.082323 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:52Z","lastTransitionTime":"2026-01-30T15:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.186569 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.186635 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.186653 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.186682 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.186702 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:52Z","lastTransitionTime":"2026-01-30T15:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.248714 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:00:10.564252377 +0000 UTC Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.289685 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.289760 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.289778 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.289807 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.289829 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:52Z","lastTransitionTime":"2026-01-30T15:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.334975 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:52 crc kubenswrapper[4740]: E0130 15:57:52.335193 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.393429 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.393504 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.393522 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.393552 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.393572 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:52Z","lastTransitionTime":"2026-01-30T15:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.496752 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.496814 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.496830 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.496861 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.496875 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:52Z","lastTransitionTime":"2026-01-30T15:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.599698 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.599763 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.599792 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.599816 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.599831 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:52Z","lastTransitionTime":"2026-01-30T15:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.702580 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.702639 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.702649 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.702670 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.702683 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:52Z","lastTransitionTime":"2026-01-30T15:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.806237 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.806339 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.806417 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.806455 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.806479 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:52Z","lastTransitionTime":"2026-01-30T15:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.909574 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.909640 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.909654 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.909710 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:52 crc kubenswrapper[4740]: I0130 15:57:52.909731 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:52Z","lastTransitionTime":"2026-01-30T15:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.013485 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.013549 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.013559 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.013576 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.013587 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:53Z","lastTransitionTime":"2026-01-30T15:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.117142 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.117216 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.117239 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.117271 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.117292 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:53Z","lastTransitionTime":"2026-01-30T15:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:53 crc kubenswrapper[4740]: E0130 15:57:53.217584 4740 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.249443 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:13:07.718381124 +0000 UTC Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.334469 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.334485 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.334521 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:53 crc kubenswrapper[4740]: E0130 15:57:53.335440 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:53 crc kubenswrapper[4740]: E0130 15:57:53.335524 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:53 crc kubenswrapper[4740]: E0130 15:57:53.335585 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.359695 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 15:56:16.450418 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 15:56:16.451102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2319707770/tls.crt::/tmp/serving-cert-2319707770/tls.key\\\\\\\"\\\\nI0130 15:56:22.557558 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 15:56:22.564485 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 15:56:22.564517 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 15:56:22.564539 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 15:56:22.564546 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 15:56:22.581427 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 15:56:22.581465 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581472 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 15:56:22.581480 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 15:56:22.581485 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 15:56:22.581490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 15:56:22.581507 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 15:56:22.581772 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 15:56:22.583875 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.383195 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c739d30317fc869442692ce4b2991528af330a5974780f98e06808f652c2640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.407812 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1dbeb579497ac32ef270219a2d203d800fdee02af6349944270324b6d147dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.420404 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4827ec5218b16e989bfd1f456459f28d6f925b03dfbe75b1a77716cad50555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54m64\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7c7j6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.437623 4740 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g5497" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ece215f-ed67-4d10-8e39-85d49a052d52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d6dd6d1896cd890fac5aea3bdbeed77d72d2a797b8a33be33b98869fd4ded2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3830e497038d28096b64b2d071d8e9f1db6bd5e89758019fc65d3473fc12a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f80996ed2620c32267b648fa86ac55aef46f59ab7dd32765b18216ab403072a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598e0ef234f654d9c0dea1a3055215e1dc1906523f7044ad8c8afeeaf62a9dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ade4bbe825ad9b57bb483a11caf7af021248a4c797a4640f04be82f27cab2ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91bb5799aa22fda76f465027e47eb07d990ae2afcf7fd9eb3ec40b842a25122b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68e618e453d426c31a6f68b73552da568b5c080a975a271252247685fa091997\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbh6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g5497\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.457371 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c06ab51-b857-47c7-a13a-e64edae96756\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:24Z\\\",\\\"message\\\":\\\"5-08-24T17:21:41Z]\\\\nI0130 15:57:24.670463 6817 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-7c7j6\\\\nI0130 15:57:24.670455 6817 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 15:57:24.670479 6817 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0130 15:57:24.668922 6817 services_controller.go:434] Service openshift-multus/multus-admission-controller retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{multus-admission-controller openshift-multus c9eea3b1-f918-4c62-9731-c809988317c1 4579 0 2025-02-23 05:21:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:multus-admission-controller] map[service.alpha.openshift.io/serving-cert-secret-name:multus-admission-controller-secret service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{operator.openshift.io/v1 Network cluster 8d01ddba-7e05-4639-926a-4485de3b6327 0xc007c8a197 0xc007c8a198}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:webhook,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},ServicePort{Name\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:57:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6hnwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhsjm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.473188 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-krvcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7f93a9ce-6677-48e3-9476-c37aa40b6347\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxnt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-krvcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.487104 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4299b252-176c-4171-9cfa-d2642a85c35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://997ccd83c8451b5a2ac8162a9af94145948684a89d85acdc55782a7b58e5cdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3863f53ebefb5be65e1fecdeeb0c11e13af387b4f87b4a70967148139dc5467\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25c704b4b5d3c5b1bafd42681c967c2ac20817d35191d603c7eaa0f0a5aaf9d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.505045 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.522962 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pkzlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e65088cb-e700-4af1-b788-af399f918bd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T15:57:17Z\\\",\\\"message\\\":\\\"2026-01-30T15:56:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb\\\\n2026-01-30T15:56:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9c47f675-e1b5-4b2b-8ec7-364165feaedb to /host/opt/cni/bin/\\\\n2026-01-30T15:56:32Z [verbose] multus-daemon started\\\\n2026-01-30T15:56:32Z [verbose] Readiness Indicator file check\\\\n2026-01-30T15:57:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxcq5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pkzlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.539194 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dd84fba-33fd-4da1-829a-d11f4be826b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0418c889d7496f859df925d46b79d68b1b6701cbed79e20d2965433bff4008\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://218482bad9b91234540dcdb40cd63f9a0c3ed4b02fcf0409f01fa5c955c56e4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec9f2bcb3de3fece3e8ba38263f4db30bb066216b97cac496c2008ddd8ba1ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77852c6fe28242067da7abdbf3b40a26eb4d6e86c8749e9ee3a38c4278ad2b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.552000 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8121a4c-4723-4864-ad5b-e0a2e78ffa3e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b540992041764ee2a7fabcdda74b222e97ce330e8488b265a5486b559f09aba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a731839f66cee9f88e630d2342700d55ac778f121190f343589f77157ea15b05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.584511 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7af2c418-73ad-4471-9c4a-1648d09eaa14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a187615099d8eb2316e90d5d3cf8f9193fdc55f2362c08440c00bdcac439cc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7831eaa07c89ee936c1eb0d2578e583401bfdf20a61449990a7975b3e2972a55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84e20da08abae11660e7c75659fa97583bce84e3e01492f36db2adb9e4d90514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://779f48007b31f1c306d0ce8d2a473a667ebc1bb20af110df279c975b3417d328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bf22ff785037823793cdf211ea74d0bb088e209e79b7bdcfa8868be2756fec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14bad37d616cd79f0397cfbe88861d17767c7edca69866641ed052cd17a59c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14bad37d616cd79f0397cfbe88861d17767c7edca69866641ed052cd17a59c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7702d55f3234ac65cc2dbacac180c56f746190afcbc9723eeb7de0d45617a98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7702d55f3234ac65cc2dbacac180c56f746190afcbc9723eeb7de0d45617a98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:55:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:55:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7c87a7089f775e75e9ebdbc6f43f0533927f91cb329644573bbd5e4088af185e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c87a7089f775e75e9ebdbc6f43f0533927f91cb329644573bbd5e4088af185e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T15:56:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T15:56:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:55:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.603242 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: E0130 15:57:53.605198 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.622558 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://640e05b09186ab3d74a1d206195a2e4bbf9521a3fd53031a5a65dcf0f511f762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35d1bafb7ecf418a6d0eba1ec950b4f782897003332f3f64a80af99fb979290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.637159 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2pc22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d02a598-e35a-4a24-bcf9-dc941d1d92d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://779fa7c31057e6b7ad6cba292f5d01f346959b57f2505ab305cd554e8a4570f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvkcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:28Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2pc22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.654896 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.670113 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xtbq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f16748fa-365c-4996-856a-4cd9a1166795\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b09dcb0f669b5c3598b19a1c5e94603db56fa747bb970e3fc34d463ac2467280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwkv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xtbq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:53 crc kubenswrapper[4740]: I0130 15:57:53.686900 4740 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36df7a4d-789b-4344-83ca-02e0c62f0fd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T15:56:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a764ef990ffce1b7f756fab1b8aa4e7ba2d6c10964ce6ab433735c8f99034b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a937882a5ad38f23886c094abfe3194c3e56ffdf66047a778383c38c6499fcb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T15:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\
",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgr4p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T15:56:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6vzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T15:57:53Z is after 2025-08-24T17:21:41Z" Jan 30 15:57:54 crc kubenswrapper[4740]: I0130 15:57:54.250574 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:57:24.757217751 +0000 UTC Jan 30 15:57:54 crc kubenswrapper[4740]: I0130 15:57:54.335058 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:54 crc kubenswrapper[4740]: E0130 15:57:54.335273 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:55 crc kubenswrapper[4740]: I0130 15:57:55.250823 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:43:00.430163756 +0000 UTC Jan 30 15:57:55 crc kubenswrapper[4740]: I0130 15:57:55.335326 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:55 crc kubenswrapper[4740]: I0130 15:57:55.335478 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:55 crc kubenswrapper[4740]: E0130 15:57:55.335652 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:55 crc kubenswrapper[4740]: I0130 15:57:55.335728 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:55 crc kubenswrapper[4740]: E0130 15:57:55.335829 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:55 crc kubenswrapper[4740]: E0130 15:57:55.335937 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:55 crc kubenswrapper[4740]: I0130 15:57:55.337445 4740 scope.go:117] "RemoveContainer" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f" Jan 30 15:57:55 crc kubenswrapper[4740]: E0130 15:57:55.337792 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jhsjm_openshift-ovn-kubernetes(2c06ab51-b857-47c7-a13a-e64edae96756)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.251101 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:46:25.136746791 +0000 UTC Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.334873 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:56 crc kubenswrapper[4740]: E0130 15:57:56.335095 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.690831 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.690880 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.690892 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.690912 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.690925 4740 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T15:57:56Z","lastTransitionTime":"2026-01-30T15:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.768377 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6"] Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.769084 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.772324 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.773541 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.773648 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.773685 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.816040 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xtbq6" podStartSLOduration=93.816003846 podStartE2EDuration="1m33.816003846s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:56.815572765 +0000 UTC m=+125.452635394" watchObservedRunningTime="2026-01-30 15:57:56.816003846 +0000 UTC m=+125.453066485" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.855938 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=93.855907249 podStartE2EDuration="1m33.855907249s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:56.855249453 +0000 UTC m=+125.492312092" watchObservedRunningTime="2026-01-30 15:57:56.855907249 +0000 UTC m=+125.492969848" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.856204 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6vzkq" podStartSLOduration=92.856199237 podStartE2EDuration="1m32.856199237s" podCreationTimestamp="2026-01-30 15:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:56.830325702 +0000 UTC m=+125.467388311" watchObservedRunningTime="2026-01-30 15:57:56.856199237 +0000 UTC m=+125.493261836" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.894479 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3b6de6c2-4b65-40af-8d57-05c624de13ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.894551 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b6de6c2-4b65-40af-8d57-05c624de13ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.894660 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b6de6c2-4b65-40af-8d57-05c624de13ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.894699 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b6de6c2-4b65-40af-8d57-05c624de13ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.894770 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3b6de6c2-4b65-40af-8d57-05c624de13ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.907503 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podStartSLOduration=93.907475773 podStartE2EDuration="1m33.907475773s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:56.907171666 +0000 UTC m=+125.544234315" watchObservedRunningTime="2026-01-30 15:57:56.907475773 +0000 UTC m=+125.544538372" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.932787 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-g5497" podStartSLOduration=93.932747302 podStartE2EDuration="1m33.932747302s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:56.931760328 +0000 UTC m=+125.568822937" watchObservedRunningTime="2026-01-30 15:57:56.932747302 +0000 UTC m=+125.569809951" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.980399 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=94.980374908 podStartE2EDuration="1m34.980374908s" podCreationTimestamp="2026-01-30 15:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:56.980204464 +0000 UTC m=+125.617267063" watchObservedRunningTime="2026-01-30 15:57:56.980374908 +0000 UTC m=+125.617437507" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.996340 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b6de6c2-4b65-40af-8d57-05c624de13ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.996490 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3b6de6c2-4b65-40af-8d57-05c624de13ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.996542 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3b6de6c2-4b65-40af-8d57-05c624de13ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.996600 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b6de6c2-4b65-40af-8d57-05c624de13ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.996659 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3b6de6c2-4b65-40af-8d57-05c624de13ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.996699 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3b6de6c2-4b65-40af-8d57-05c624de13ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.996757 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b6de6c2-4b65-40af-8d57-05c624de13ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:56 crc kubenswrapper[4740]: I0130 15:57:56.997906 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3b6de6c2-4b65-40af-8d57-05c624de13ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.004504 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b6de6c2-4b65-40af-8d57-05c624de13ed-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.014875 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3b6de6c2-4b65-40af-8d57-05c624de13ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n9tv6\" (UID: \"3b6de6c2-4b65-40af-8d57-05c624de13ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.017446 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-pkzlw" podStartSLOduration=94.01741714 podStartE2EDuration="1m34.01741714s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:57.016709343 +0000 UTC m=+125.653771942" watchObservedRunningTime="2026-01-30 15:57:57.01741714 +0000 UTC m=+125.654479739" Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.060764 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=69.060736819 podStartE2EDuration="1m9.060736819s" podCreationTimestamp="2026-01-30 15:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:57.04712974 +0000 UTC m=+125.684192359" watchObservedRunningTime="2026-01-30 15:57:57.060736819 +0000 UTC m=+125.697799418" Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.061449 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=41.061440596 podStartE2EDuration="41.061440596s" podCreationTimestamp="2026-01-30 15:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:57.060926033 +0000 UTC m=+125.697988632" watchObservedRunningTime="2026-01-30 15:57:57.061440596 +0000 UTC m=+125.698503195" Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.085752 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=15.085730441 podStartE2EDuration="15.085730441s" podCreationTimestamp="2026-01-30 15:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:57.084530881 +0000 UTC m=+125.721593480" watchObservedRunningTime="2026-01-30 15:57:57.085730441 +0000 UTC m=+125.722793040" Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.097533 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.246635 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" event={"ID":"3b6de6c2-4b65-40af-8d57-05c624de13ed","Type":"ContainerStarted","Data":"8718e14add3b69b0208b5d1437ce7bb6ff8137ec9438d7bb81fa65d74f808f9b"} Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.251817 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:15:43.214496322 +0000 UTC Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.251886 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.261756 4740 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.334572 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.334599 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:57 crc kubenswrapper[4740]: I0130 15:57:57.334710 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:57 crc kubenswrapper[4740]: E0130 15:57:57.334918 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:57 crc kubenswrapper[4740]: E0130 15:57:57.334794 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:57 crc kubenswrapper[4740]: E0130 15:57:57.335273 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:57:58 crc kubenswrapper[4740]: I0130 15:57:58.253009 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" event={"ID":"3b6de6c2-4b65-40af-8d57-05c624de13ed","Type":"ContainerStarted","Data":"6b80bbd623cc8324b70e5df45f008ec41cedf874fa7adc3f968b7257cd346a09"} Jan 30 15:57:58 crc kubenswrapper[4740]: I0130 15:57:58.269730 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n9tv6" podStartSLOduration=95.269706416 podStartE2EDuration="1m35.269706416s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:58.269165162 +0000 UTC m=+126.906227781" watchObservedRunningTime="2026-01-30 15:57:58.269706416 +0000 UTC m=+126.906769015" Jan 30 15:57:58 crc kubenswrapper[4740]: I0130 15:57:58.269993 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-2pc22" podStartSLOduration=95.269987273 podStartE2EDuration="1m35.269987273s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:57:57.14553735 +0000 UTC m=+125.782599949" watchObservedRunningTime="2026-01-30 15:57:58.269987273 +0000 UTC m=+126.907049892" Jan 30 15:57:58 crc kubenswrapper[4740]: I0130 15:57:58.335024 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:57:58 crc kubenswrapper[4740]: E0130 15:57:58.335246 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:57:58 crc kubenswrapper[4740]: E0130 15:57:58.607383 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 15:57:59 crc kubenswrapper[4740]: I0130 15:57:59.335124 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:57:59 crc kubenswrapper[4740]: I0130 15:57:59.335140 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:57:59 crc kubenswrapper[4740]: I0130 15:57:59.335154 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:57:59 crc kubenswrapper[4740]: E0130 15:57:59.336570 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:57:59 crc kubenswrapper[4740]: E0130 15:57:59.336726 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:57:59 crc kubenswrapper[4740]: E0130 15:57:59.336326 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:00 crc kubenswrapper[4740]: I0130 15:58:00.352794 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:00 crc kubenswrapper[4740]: E0130 15:58:00.352986 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:01 crc kubenswrapper[4740]: I0130 15:58:01.334630 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:01 crc kubenswrapper[4740]: I0130 15:58:01.334737 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:01 crc kubenswrapper[4740]: E0130 15:58:01.334853 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:01 crc kubenswrapper[4740]: E0130 15:58:01.334963 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:01 crc kubenswrapper[4740]: I0130 15:58:01.335097 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:01 crc kubenswrapper[4740]: E0130 15:58:01.335226 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:02 crc kubenswrapper[4740]: I0130 15:58:02.334776 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:02 crc kubenswrapper[4740]: E0130 15:58:02.336205 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:03 crc kubenswrapper[4740]: I0130 15:58:03.334643 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:03 crc kubenswrapper[4740]: I0130 15:58:03.334872 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:03 crc kubenswrapper[4740]: I0130 15:58:03.335005 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:03 crc kubenswrapper[4740]: E0130 15:58:03.336551 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:03 crc kubenswrapper[4740]: E0130 15:58:03.336746 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:03 crc kubenswrapper[4740]: E0130 15:58:03.337051 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:03 crc kubenswrapper[4740]: E0130 15:58:03.608449 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 15:58:04 crc kubenswrapper[4740]: I0130 15:58:04.281794 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pkzlw_e65088cb-e700-4af1-b788-af399f918bd0/kube-multus/1.log" Jan 30 15:58:04 crc kubenswrapper[4740]: I0130 15:58:04.283683 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pkzlw_e65088cb-e700-4af1-b788-af399f918bd0/kube-multus/0.log" Jan 30 15:58:04 crc kubenswrapper[4740]: I0130 15:58:04.283754 4740 generic.go:334] "Generic (PLEG): container finished" podID="e65088cb-e700-4af1-b788-af399f918bd0" containerID="1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46" exitCode=1 Jan 30 15:58:04 crc kubenswrapper[4740]: I0130 15:58:04.283798 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pkzlw" event={"ID":"e65088cb-e700-4af1-b788-af399f918bd0","Type":"ContainerDied","Data":"1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46"} Jan 30 15:58:04 crc kubenswrapper[4740]: I0130 15:58:04.283857 4740 scope.go:117] "RemoveContainer" containerID="db08d74098792934f5789e4139827d850f24001f58629f9d140baca8f351b9a4" Jan 30 15:58:04 crc kubenswrapper[4740]: I0130 15:58:04.284650 4740 scope.go:117] "RemoveContainer" containerID="1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46" Jan 30 15:58:04 crc kubenswrapper[4740]: E0130 15:58:04.284957 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-pkzlw_openshift-multus(e65088cb-e700-4af1-b788-af399f918bd0)\"" pod="openshift-multus/multus-pkzlw" podUID="e65088cb-e700-4af1-b788-af399f918bd0" Jan 30 15:58:04 crc kubenswrapper[4740]: I0130 15:58:04.334735 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:04 crc kubenswrapper[4740]: E0130 15:58:04.335324 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:05 crc kubenswrapper[4740]: I0130 15:58:05.289854 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pkzlw_e65088cb-e700-4af1-b788-af399f918bd0/kube-multus/1.log" Jan 30 15:58:05 crc kubenswrapper[4740]: I0130 15:58:05.334647 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:05 crc kubenswrapper[4740]: I0130 15:58:05.334747 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:05 crc kubenswrapper[4740]: I0130 15:58:05.334657 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:05 crc kubenswrapper[4740]: E0130 15:58:05.334896 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:05 crc kubenswrapper[4740]: E0130 15:58:05.335312 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:05 crc kubenswrapper[4740]: E0130 15:58:05.335432 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:06 crc kubenswrapper[4740]: I0130 15:58:06.335242 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:06 crc kubenswrapper[4740]: E0130 15:58:06.335979 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:07 crc kubenswrapper[4740]: I0130 15:58:07.335435 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:07 crc kubenswrapper[4740]: I0130 15:58:07.335484 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:07 crc kubenswrapper[4740]: I0130 15:58:07.335565 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:07 crc kubenswrapper[4740]: E0130 15:58:07.336039 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:07 crc kubenswrapper[4740]: E0130 15:58:07.336202 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:07 crc kubenswrapper[4740]: E0130 15:58:07.336716 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:08 crc kubenswrapper[4740]: I0130 15:58:08.334900 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:08 crc kubenswrapper[4740]: E0130 15:58:08.335498 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:08 crc kubenswrapper[4740]: E0130 15:58:08.610163 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 15:58:09 crc kubenswrapper[4740]: I0130 15:58:09.335050 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:09 crc kubenswrapper[4740]: I0130 15:58:09.335050 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:09 crc kubenswrapper[4740]: E0130 15:58:09.335312 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:09 crc kubenswrapper[4740]: I0130 15:58:09.335521 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:09 crc kubenswrapper[4740]: E0130 15:58:09.335581 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:09 crc kubenswrapper[4740]: E0130 15:58:09.336285 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:09 crc kubenswrapper[4740]: I0130 15:58:09.337070 4740 scope.go:117] "RemoveContainer" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f" Jan 30 15:58:10 crc kubenswrapper[4740]: I0130 15:58:10.309936 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/3.log" Jan 30 15:58:10 crc kubenswrapper[4740]: I0130 15:58:10.312294 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerStarted","Data":"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84"} Jan 30 15:58:10 crc kubenswrapper[4740]: I0130 15:58:10.313489 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:58:10 crc kubenswrapper[4740]: I0130 15:58:10.335160 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:10 crc kubenswrapper[4740]: E0130 15:58:10.335300 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:10 crc kubenswrapper[4740]: I0130 15:58:10.343399 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podStartSLOduration=107.343381706 podStartE2EDuration="1m47.343381706s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:10.342994656 +0000 UTC m=+138.980057255" watchObservedRunningTime="2026-01-30 15:58:10.343381706 +0000 UTC m=+138.980444305" Jan 30 15:58:10 crc kubenswrapper[4740]: I0130 15:58:10.515241 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-krvcv"] Jan 30 15:58:10 crc kubenswrapper[4740]: I0130 15:58:10.515480 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:10 crc kubenswrapper[4740]: E0130 15:58:10.515677 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:11 crc kubenswrapper[4740]: I0130 15:58:11.335175 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:11 crc kubenswrapper[4740]: I0130 15:58:11.335179 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:11 crc kubenswrapper[4740]: E0130 15:58:11.335389 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:11 crc kubenswrapper[4740]: E0130 15:58:11.335549 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:12 crc kubenswrapper[4740]: I0130 15:58:12.334476 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:12 crc kubenswrapper[4740]: I0130 15:58:12.334499 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:12 crc kubenswrapper[4740]: E0130 15:58:12.334711 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:12 crc kubenswrapper[4740]: E0130 15:58:12.334868 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:13 crc kubenswrapper[4740]: I0130 15:58:13.334678 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:13 crc kubenswrapper[4740]: I0130 15:58:13.334776 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:13 crc kubenswrapper[4740]: E0130 15:58:13.336844 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:13 crc kubenswrapper[4740]: E0130 15:58:13.337133 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:13 crc kubenswrapper[4740]: E0130 15:58:13.611013 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 15:58:14 crc kubenswrapper[4740]: I0130 15:58:14.335080 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:14 crc kubenswrapper[4740]: I0130 15:58:14.335080 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:14 crc kubenswrapper[4740]: E0130 15:58:14.335282 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:14 crc kubenswrapper[4740]: E0130 15:58:14.335339 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:15 crc kubenswrapper[4740]: I0130 15:58:15.334717 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:15 crc kubenswrapper[4740]: I0130 15:58:15.334947 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:15 crc kubenswrapper[4740]: E0130 15:58:15.334946 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:15 crc kubenswrapper[4740]: E0130 15:58:15.335191 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:16 crc kubenswrapper[4740]: I0130 15:58:16.334587 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:16 crc kubenswrapper[4740]: E0130 15:58:16.334819 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:16 crc kubenswrapper[4740]: I0130 15:58:16.335270 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:16 crc kubenswrapper[4740]: E0130 15:58:16.336597 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:17 crc kubenswrapper[4740]: I0130 15:58:17.335625 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:17 crc kubenswrapper[4740]: E0130 15:58:17.336907 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:17 crc kubenswrapper[4740]: I0130 15:58:17.336565 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:17 crc kubenswrapper[4740]: E0130 15:58:17.337440 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:18 crc kubenswrapper[4740]: I0130 15:58:18.335255 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:18 crc kubenswrapper[4740]: I0130 15:58:18.335625 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:18 crc kubenswrapper[4740]: E0130 15:58:18.335750 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:18 crc kubenswrapper[4740]: I0130 15:58:18.336025 4740 scope.go:117] "RemoveContainer" containerID="1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46" Jan 30 15:58:18 crc kubenswrapper[4740]: E0130 15:58:18.336083 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:18 crc kubenswrapper[4740]: E0130 15:58:18.613146 4740 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 15:58:19 crc kubenswrapper[4740]: I0130 15:58:19.334995 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:19 crc kubenswrapper[4740]: I0130 15:58:19.335001 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:19 crc kubenswrapper[4740]: E0130 15:58:19.336188 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:19 crc kubenswrapper[4740]: E0130 15:58:19.336218 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:19 crc kubenswrapper[4740]: I0130 15:58:19.351777 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pkzlw_e65088cb-e700-4af1-b788-af399f918bd0/kube-multus/1.log" Jan 30 15:58:19 crc kubenswrapper[4740]: I0130 15:58:19.351873 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pkzlw" event={"ID":"e65088cb-e700-4af1-b788-af399f918bd0","Type":"ContainerStarted","Data":"4deaee5491574ddce3f8b6266f274aca00c442e1961910366d9aca5c00715c3c"} Jan 30 15:58:20 crc kubenswrapper[4740]: I0130 15:58:20.335291 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:20 crc kubenswrapper[4740]: E0130 15:58:20.335536 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:20 crc kubenswrapper[4740]: I0130 15:58:20.335833 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:20 crc kubenswrapper[4740]: E0130 15:58:20.336251 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:21 crc kubenswrapper[4740]: I0130 15:58:21.334830 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:21 crc kubenswrapper[4740]: E0130 15:58:21.335053 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:21 crc kubenswrapper[4740]: I0130 15:58:21.335571 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:21 crc kubenswrapper[4740]: E0130 15:58:21.335925 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:22 crc kubenswrapper[4740]: I0130 15:58:22.334543 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:22 crc kubenswrapper[4740]: I0130 15:58:22.334738 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:22 crc kubenswrapper[4740]: E0130 15:58:22.334916 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-krvcv" podUID="7f93a9ce-6677-48e3-9476-c37aa40b6347" Jan 30 15:58:22 crc kubenswrapper[4740]: E0130 15:58:22.335075 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 15:58:23 crc kubenswrapper[4740]: I0130 15:58:23.334770 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:23 crc kubenswrapper[4740]: I0130 15:58:23.335653 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:23 crc kubenswrapper[4740]: E0130 15:58:23.337076 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 15:58:23 crc kubenswrapper[4740]: E0130 15:58:23.337212 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 15:58:24 crc kubenswrapper[4740]: I0130 15:58:24.335186 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:24 crc kubenswrapper[4740]: I0130 15:58:24.335248 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:24 crc kubenswrapper[4740]: I0130 15:58:24.338277 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 15:58:24 crc kubenswrapper[4740]: I0130 15:58:24.339164 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 15:58:24 crc kubenswrapper[4740]: I0130 15:58:24.339538 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 15:58:24 crc kubenswrapper[4740]: I0130 15:58:24.341238 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 15:58:24 crc kubenswrapper[4740]: I0130 15:58:24.955323 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 15:58:25 crc kubenswrapper[4740]: I0130 15:58:25.334761 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:25 crc kubenswrapper[4740]: I0130 15:58:25.334776 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:25 crc kubenswrapper[4740]: I0130 15:58:25.338065 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 15:58:25 crc kubenswrapper[4740]: I0130 15:58:25.338253 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.217342 4740 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.275437 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xzvss"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.276465 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.277991 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fjvms"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.278886 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.279936 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.280160 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.281200 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.282655 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dl6xs"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.283490 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rf9jx"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.284072 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.284280 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.284830 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.285417 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.286098 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.286193 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.286649 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.286870 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.286944 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.286875 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.287092 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.288176 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.288269 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.288549 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.288728 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.289040 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.289255 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 15:58:27 crc 
kubenswrapper[4740]: I0130 15:58:27.288180 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.289508 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.289451 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.290609 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.290933 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.292422 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.293455 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.293758 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.294039 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.294342 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.318861 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.319030 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.319091 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.319288 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.334858 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.335085 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.337605 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.337781 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.338014 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.338143 4740 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.338167 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.338224 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.338757 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.339179 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.345148 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sg76q"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.345768 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.345840 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.347644 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.347839 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.348063 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.348185 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.350985 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-j2cnv"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.351914 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qkld"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.352499 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2crr5"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.353377 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.353904 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.353954 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.353917 4740 util.go:30] "No sandbox for pod can be found. 
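
Each "Caches populated" line is a per-object reflector reporting that the kubelet's local cache of one ConfigMap or Secret is warm; the burst tracks the flood of "SyncLoop ADD" pods, since every referenced ConfigMap and Secret must be cached before its volume can be mounted. The same populate-then-sync pattern, sketched with client-go's shared informers rather than the kubelet's internal per-object reflectors:

    package main

    import (
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        stop := make(chan struct{})
        defer close(stop)
        factory := informers.NewSharedInformerFactory(client, 0)
        cm := factory.Core().V1().ConfigMaps().Informer()
        factory.Start(stop)
        // Blocks until the ConfigMap cache is populated, the moment the
        // kubelet would log "Caches populated" for its own reflectors.
        cache.WaitForCacheSync(stop, cm.HasSynced)
    }
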
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.354467 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.354878 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.354988 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.355141 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.355245 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.355328 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.356471 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.355383 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.355430 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.355510 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.356731 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.356832 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.357203 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.368524 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-vvtsd"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.369009 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.369296 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-vvtsd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.369388 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.372835 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.379364 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.379569 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.379668 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.379869 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.379978 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.380116 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.380664 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.383525 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9rptf"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.383888 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.384093 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.393965 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.394475 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.394848 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.395147 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396230 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396284 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/be7f0e88-7c2e-4c1b-a617-9da27584b057-images\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396336 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396390 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4f06d56-e3ce-413c-bbaf-f479d0629867-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396423 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15af21ec-1c9c-46bc-b2be-8efa7628acf8-config\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396450 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43c42cd5-18b5-4430-87c3-67ba872bb44f-trusted-ca\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396506 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-audit\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396560 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m7m8\" (UniqueName: \"kubernetes.io/projected/c4f06d56-e3ce-413c-bbaf-f479d0629867-kube-api-access-2m7m8\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396598 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396676 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396713 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-serving-cert\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396742 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r7dn\" (UniqueName: \"kubernetes.io/projected/64c5a0e0-9121-416a-b48c-219349cc9ba3-kube-api-access-7r7dn\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396776 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43c42cd5-18b5-4430-87c3-67ba872bb44f-config\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396812 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396840 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpbz6\" (UniqueName: \"kubernetes.io/projected/15af21ec-1c9c-46bc-b2be-8efa7628acf8-kube-api-access-hpbz6\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396876 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4f06d56-e3ce-413c-bbaf-f479d0629867-audit-dir\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.396962 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.397002 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.397022 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15af21ec-1c9c-46bc-b2be-8efa7628acf8-service-ca-bundle\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.397054 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-audit-dir\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.397082 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-client-ca\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.397111 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-config\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.398146 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.402442 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pvwn4"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.404754 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.397134 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-etcd-serving-ca\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.405500 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-client-ca\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.405561 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skw6l\" (UniqueName: \"kubernetes.io/projected/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-kube-api-access-skw6l\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.405606 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4f06d56-e3ce-413c-bbaf-f479d0629867-etcd-client\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.405906 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpvvf\" (UniqueName: \"kubernetes.io/projected/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-kube-api-access-cpvvf\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.405932 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/69ccfcc2-8b4e-489d-8674-e41092950276-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-97t7f\" (UID: \"69ccfcc2-8b4e-489d-8674-e41092950276\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.406067 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-dir\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.406103 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.406150 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdzz9\" (UniqueName: \"kubernetes.io/projected/69ccfcc2-8b4e-489d-8674-e41092950276-kube-api-access-bdzz9\") pod \"cluster-samples-operator-665b6dd947-97t7f\" (UID: \"69ccfcc2-8b4e-489d-8674-e41092950276\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.406177 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 
15:58:27.406209 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.406389 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-image-import-ca\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.406424 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4f06d56-e3ce-413c-bbaf-f479d0629867-audit-policies\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.406487 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1430672f-603b-4f60-bb2a-e95cd48a56c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.407378 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.408660 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-etcd-client\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.408723 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f06d56-e3ce-413c-bbaf-f479d0629867-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.409242 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414172 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414532 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414572 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43c42cd5-18b5-4430-87c3-67ba872bb44f-serving-cert\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414602 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll6xb\" (UniqueName: \"kubernetes.io/projected/1430672f-603b-4f60-bb2a-e95cd48a56c2-kube-api-access-ll6xb\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414627 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414649 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-node-pullsecrets\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414673 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4f06d56-e3ce-413c-bbaf-f479d0629867-encryption-config\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414696 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqkcl\" (UniqueName: \"kubernetes.io/projected/43c42cd5-18b5-4430-87c3-67ba872bb44f-kube-api-access-hqkcl\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414730 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-config\") pod \"controller-manager-879f6c89f-fjvms\" (UID: 
\"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414796 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15af21ec-1c9c-46bc-b2be-8efa7628acf8-serving-cert\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414814 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-encryption-config\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414835 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7f0e88-7c2e-4c1b-a617-9da27584b057-config\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414859 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6glvx\" (UniqueName: \"kubernetes.io/projected/be7f0e88-7c2e-4c1b-a617-9da27584b057-kube-api-access-6glvx\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414896 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414933 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-policies\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414958 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15af21ec-1c9c-46bc-b2be-8efa7628acf8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414987 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-serving-cert\") pod \"controller-manager-879f6c89f-fjvms\" (UID: 
\"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.415011 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-config\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.415041 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f06d56-e3ce-413c-bbaf-f479d0629867-serving-cert\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.415063 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/be7f0e88-7c2e-4c1b-a617-9da27584b057-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.414537 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.415672 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.445585 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.447221 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.448044 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.449473 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.449836 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.450865 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.451197 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.453333 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.453531 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.454517 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.454536 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.454625 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.454641 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.454714 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.454789 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.454804 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.454883 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455119 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455166 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455254 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455280 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455335 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455441 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 15:58:27 crc 
kubenswrapper[4740]: I0130 15:58:27.455542 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455623 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455703 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455758 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455851 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455901 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.456001 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.456635 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.455857 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.457138 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.457321 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.457575 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.457663 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-5q9nt"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.458256 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.458372 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.458807 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.460533 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.462312 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.463767 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.467920 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.468324 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.468467 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.472152 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.473106 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.473135 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.473278 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.474185 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.481242 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.487448 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.488341 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mscjf"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.488974 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.489190 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.489260 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.489999 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.491312 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.492951 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.493811 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.494847 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.495921 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.498142 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.505002 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.505171 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.505371 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15af21ec-1c9c-46bc-b2be-8efa7628acf8-service-ca-bundle\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522413 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-audit-dir\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522451 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-client-ca\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522483 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-config\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522515 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-etcd-serving-ca\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522533 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-client-ca\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522557 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skw6l\" (UniqueName: \"kubernetes.io/projected/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-kube-api-access-skw6l\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522584 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4f06d56-e3ce-413c-bbaf-f479d0629867-etcd-client\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522605 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpvvf\" (UniqueName: 
\"kubernetes.io/projected/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-kube-api-access-cpvvf\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522609 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-audit-dir\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522629 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/69ccfcc2-8b4e-489d-8674-e41092950276-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-97t7f\" (UID: \"69ccfcc2-8b4e-489d-8674-e41092950276\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522664 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-dir\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522702 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdzz9\" (UniqueName: \"kubernetes.io/projected/69ccfcc2-8b4e-489d-8674-e41092950276-kube-api-access-bdzz9\") pod \"cluster-samples-operator-665b6dd947-97t7f\" (UID: \"69ccfcc2-8b4e-489d-8674-e41092950276\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522734 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522767 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522807 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522834 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-image-import-ca\") pod 
\"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522858 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4f06d56-e3ce-413c-bbaf-f479d0629867-audit-policies\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522885 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1430672f-603b-4f60-bb2a-e95cd48a56c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522914 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4f06d56-e3ce-413c-bbaf-f479d0629867-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522950 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-etcd-client\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.522978 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll6xb\" (UniqueName: \"kubernetes.io/projected/1430672f-603b-4f60-bb2a-e95cd48a56c2-kube-api-access-ll6xb\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.523008 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.523034 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43c42cd5-18b5-4430-87c3-67ba872bb44f-serving-cert\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.523056 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqkcl\" (UniqueName: \"kubernetes.io/projected/43c42cd5-18b5-4430-87c3-67ba872bb44f-kube-api-access-hqkcl\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 
15:58:27.523078 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.523115 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-node-pullsecrets\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.523137 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4f06d56-e3ce-413c-bbaf-f479d0629867-encryption-config\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.523161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-config\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.523684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15af21ec-1c9c-46bc-b2be-8efa7628acf8-service-ca-bundle\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.523940 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7f0e88-7c2e-4c1b-a617-9da27584b057-config\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.523977 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6glvx\" (UniqueName: \"kubernetes.io/projected/be7f0e88-7c2e-4c1b-a617-9da27584b057-kube-api-access-6glvx\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524055 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15af21ec-1c9c-46bc-b2be-8efa7628acf8-serving-cert\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524095 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-encryption-config\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524146 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524292 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-policies\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524331 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15af21ec-1c9c-46bc-b2be-8efa7628acf8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524386 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-serving-cert\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524420 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-config\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524448 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f06d56-e3ce-413c-bbaf-f479d0629867-serving-cert\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524478 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/be7f0e88-7c2e-4c1b-a617-9da27584b057-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524629 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") 
" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524664 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/be7f0e88-7c2e-4c1b-a617-9da27584b057-images\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524693 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524721 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4f06d56-e3ce-413c-bbaf-f479d0629867-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524751 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15af21ec-1c9c-46bc-b2be-8efa7628acf8-config\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524781 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43c42cd5-18b5-4430-87c3-67ba872bb44f-trusted-ca\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.524928 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-image-import-ca\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.525215 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.525928 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4f06d56-e3ce-413c-bbaf-f479d0629867-audit-policies\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.526743 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-audit\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.526846 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.526888 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m7m8\" (UniqueName: \"kubernetes.io/projected/c4f06d56-e3ce-413c-bbaf-f479d0629867-kube-api-access-2m7m8\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.526917 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.526965 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-serving-cert\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.526992 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r7dn\" (UniqueName: \"kubernetes.io/projected/64c5a0e0-9121-416a-b48c-219349cc9ba3-kube-api-access-7r7dn\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.527016 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43c42cd5-18b5-4430-87c3-67ba872bb44f-config\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.527044 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.527083 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpbz6\" (UniqueName: \"kubernetes.io/projected/15af21ec-1c9c-46bc-b2be-8efa7628acf8-kube-api-access-hpbz6\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.527121 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.527149 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4f06d56-e3ce-413c-bbaf-f479d0629867-audit-dir\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.527194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.527227 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.527293 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-client-ca\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.527952 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjf5j"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.529094 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rf9jx"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.529217 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.530023 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.532393 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-config\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.533273 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-etcd-serving-ca\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.533532 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.534559 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-policies\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.534816 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-client-ca\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.535168 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-config\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.535563 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7f0e88-7c2e-4c1b-a617-9da27584b057-config\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.537032 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4f06d56-e3ce-413c-bbaf-f479d0629867-encryption-config\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.537174 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c4f06d56-e3ce-413c-bbaf-f479d0629867-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.537421 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.538545 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-audit\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.540328 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.540437 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-node-pullsecrets\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.541187 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4f06d56-e3ce-413c-bbaf-f479d0629867-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.541939 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15af21ec-1c9c-46bc-b2be-8efa7628acf8-config\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.542075 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-dir\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.543862 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4f06d56-e3ce-413c-bbaf-f479d0629867-etcd-client\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.543990 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/be7f0e88-7c2e-4c1b-a617-9da27584b057-images\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.544391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-config\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.544669 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.545916 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.546307 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15af21ec-1c9c-46bc-b2be-8efa7628acf8-serving-cert\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.555294 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43c42cd5-18b5-4430-87c3-67ba872bb44f-serving-cert\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.555902 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-serving-cert\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.555963 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4f06d56-e3ce-413c-bbaf-f479d0629867-audit-dir\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.556543 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.556942 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-serving-cert\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:27 crc 
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.559302 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm"]
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.559961 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-encryption-config\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.560563 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-etcd-client\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.560644 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.560899 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1430672f-603b-4f60-bb2a-e95cd48a56c2-serving-cert\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.561305 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.562212 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.562594 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.562813 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.562998 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj"]
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.564297 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.564734 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15af21ec-1c9c-46bc-b2be-8efa7628acf8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.566524 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.566659 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43c42cd5-18b5-4430-87c3-67ba872bb44f-config\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.566858 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.568132 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.568481 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ssmd6"]
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.568746 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.569029 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.569047 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/be7f0e88-7c2e-4c1b-a617-9da27584b057-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.569340 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43c42cd5-18b5-4430-87c3-67ba872bb44f-trusted-ca\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.569803 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.570025 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.571828 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4f06d56-e3ce-413c-bbaf-f479d0629867-serving-cert\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.572233 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/69ccfcc2-8b4e-489d-8674-e41092950276-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-97t7f\" (UID: \"69ccfcc2-8b4e-489d-8674-e41092950276\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.572623 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx"
Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.572671 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp"]
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.574543 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xzvss"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.575661 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fjvms"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.576793 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.577766 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.578397 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dl6xs"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.578524 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.579690 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sg76q"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.581040 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-j2cnv"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.582099 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.583420 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.584603 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6x58j"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.585667 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.586239 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.587792 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2crr5"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.588898 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qkld"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.590249 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.591421 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vvtsd"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.593020 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.594357 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5q9nt"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.596442 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.598261 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.598404 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.600067 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.601118 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.605909 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wdjz2"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.607082 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rqtgp"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.607403 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.608383 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mscjf"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.608458 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.609371 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.610390 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.611512 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9rptf"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.612526 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.613526 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.614799 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.615871 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.616899 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wdjz2"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.617917 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.618968 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.619608 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.619955 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.620983 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjf5j"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.621996 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ssmd6"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.623082 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rqtgp"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.624091 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.625176 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-7l62s"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.626073 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7l62s" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.626169 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7l62s"] Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.639185 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.658471 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.682413 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.699602 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.719151 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.740069 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.778998 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.798437 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.819284 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.830621 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-bound-sa-token\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.830669 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d84287-a513-45bd-ada2-57b6e115a754-serving-cert\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.830709 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcjbp\" (UniqueName: \"kubernetes.io/projected/cd736e46-6f9b-41ed-9503-1646948ed818-kube-api-access-qcjbp\") pod \"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.830847 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgdq4\" (UniqueName: \"kubernetes.io/projected/cef47ed2-b13f-4f69-ab97-3665967de31d-kube-api-access-fgdq4\") pod 
\"openshift-config-operator-7777fb866f-g9wqg\" (UID: \"cef47ed2-b13f-4f69-ab97-3665967de31d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.830920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cef47ed2-b13f-4f69-ab97-3665967de31d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g9wqg\" (UID: \"cef47ed2-b13f-4f69-ab97-3665967de31d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.830956 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-trusted-ca\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831027 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d84287-a513-45bd-ada2-57b6e115a754-config\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831057 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef47ed2-b13f-4f69-ab97-3665967de31d-serving-cert\") pod \"openshift-config-operator-7777fb866f-g9wqg\" (UID: \"cef47ed2-b13f-4f69-ab97-3665967de31d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831086 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd736e46-6f9b-41ed-9503-1646948ed818-trusted-ca\") pod \"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831153 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96d8j\" (UniqueName: \"kubernetes.io/projected/72d84287-a513-45bd-ada2-57b6e115a754-kube-api-access-96d8j\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831189 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831218 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/72d84287-a513-45bd-ada2-57b6e115a754-etcd-service-ca\") pod \"etcd-operator-b45778765-2crr5\" (UID: 
\"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cd736e46-6f9b-41ed-9503-1646948ed818-metrics-tls\") pod \"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831338 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831409 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-certificates\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831472 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8t6x\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-kube-api-access-d8t6x\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831501 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd736e46-6f9b-41ed-9503-1646948ed818-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831527 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72d84287-a513-45bd-ada2-57b6e115a754-etcd-client\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831557 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831652 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-tls\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: E0130 15:58:27.831772 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.331752522 +0000 UTC m=+156.968815371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.831831 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/72d84287-a513-45bd-ada2-57b6e115a754-etcd-ca\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.840172 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.880146 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.899839 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.918945 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.933017 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:27 crc kubenswrapper[4740]: E0130 15:58:27.933277 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.433239108 +0000 UTC m=+157.070301707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.933635 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5tn\" (UniqueName: \"kubernetes.io/projected/dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0-kube-api-access-vr5tn\") pod \"downloads-7954f5f757-vvtsd\" (UID: \"dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0\") " pod="openshift-console/downloads-7954f5f757-vvtsd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.933835 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db18118-120c-43d1-a71a-281f8c7a0adf-config\") pod \"kube-apiserver-operator-766d6c64bb-4mghk\" (UID: \"9db18118-120c-43d1-a71a-281f8c7a0adf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.934042 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgdq4\" (UniqueName: \"kubernetes.io/projected/cef47ed2-b13f-4f69-ab97-3665967de31d-kube-api-access-fgdq4\") pod \"openshift-config-operator-7777fb866f-g9wqg\" (UID: \"cef47ed2-b13f-4f69-ab97-3665967de31d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.934642 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-trusted-ca\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.934746 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d84287-a513-45bd-ada2-57b6e115a754-config\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.934834 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3c5a7f7-b993-459b-8f88-83b6861e4bb4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hcwm5\" (UID: \"f3c5a7f7-b993-459b-8f88-83b6861e4bb4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.934886 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43f87d70-f314-4419-be49-f97060083a68-metrics-tls\") pod \"dns-operator-744455d44c-9rptf\" (UID: \"43f87d70-f314-4419-be49-f97060083a68\") " pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.934968 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef47ed2-b13f-4f69-ab97-3665967de31d-serving-cert\") pod \"openshift-config-operator-7777fb866f-g9wqg\" (UID: \"cef47ed2-b13f-4f69-ab97-3665967de31d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.935019 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d14606d1-1dc6-4ec7-a1e4-6eabc01b5548-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5cbjc\" (UID: \"d14606d1-1dc6-4ec7-a1e4-6eabc01b5548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.935072 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/30b7b1ea-4fad-47ff-8278-6d1e3f256b51-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mscjf\" (UID: \"30b7b1ea-4fad-47ff-8278-6d1e3f256b51\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.935118 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72d7s\" (UniqueName: \"kubernetes.io/projected/26373528-9c79-419c-a68b-8cce50827fd5-kube-api-access-72d7s\") pod \"migrator-59844c95c7-d4zn6\" (UID: \"26373528-9c79-419c-a68b-8cce50827fd5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.935234 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/771ae644-7090-4bf7-915c-142ca8c5e982-webhook-cert\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.935288 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0336ee48-8f1e-49ed-a021-a01446330b39-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lpktp\" (UID: \"0336ee48-8f1e-49ed-a021-a01446330b39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.935503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.935597 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-registration-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.935649 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-service-ca\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.935733 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b31422a-a539-4d2d-ba7b-0b7cffd27bf2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g6gsm\" (UID: \"2b31422a-a539-4d2d-ba7b-0b7cffd27bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.935930 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee70f092-28be-470d-961b-0c777d465523-metrics-certs\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.935984 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/efdc514b-cd12-4784-951b-8c0b2878dd02-auth-proxy-config\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936024 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936048 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/771ae644-7090-4bf7-915c-142ca8c5e982-apiservice-cert\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936109 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-oauth-config\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936236 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-certificates\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936419 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936450 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d7wp\" (UniqueName: \"kubernetes.io/projected/771ae644-7090-4bf7-915c-142ca8c5e982-kube-api-access-2d7wp\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:27 crc kubenswrapper[4740]: E0130 15:58:27.936436 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.436416517 +0000 UTC m=+157.073479306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936607 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4sft\" (UniqueName: \"kubernetes.io/projected/dce7e8a0-f532-4564-9e0b-771e10667429-kube-api-access-d4sft\") pod \"openshift-controller-manager-operator-756b6f6bc6-bdmzk\" (UID: \"dce7e8a0-f532-4564-9e0b-771e10667429\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936660 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4be91ca-c1df-458b-b8da-29f713fefe22-secret-volume\") pod \"collect-profiles-29496465-r7p7t\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936734 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-mountpoint-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8t6x\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-kube-api-access-d8t6x\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936803 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd736e46-6f9b-41ed-9503-1646948ed818-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936836 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72d84287-a513-45bd-ada2-57b6e115a754-etcd-client\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936880 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9db18118-120c-43d1-a71a-281f8c7a0adf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4mghk\" (UID: \"9db18118-120c-43d1-a71a-281f8c7a0adf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936909 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqdb\" (UniqueName: \"kubernetes.io/projected/f4a1a69b-6c27-4b5c-95ed-ea05e85bee50-kube-api-access-mjqdb\") pod \"kube-storage-version-migrator-operator-b67b599dd-cnnxt\" (UID: \"f4a1a69b-6c27-4b5c-95ed-ea05e85bee50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936936 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724pn\" (UniqueName: \"kubernetes.io/projected/ee70f092-28be-470d-961b-0c777d465523-kube-api-access-724pn\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936964 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.936988 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3c5a7f7-b993-459b-8f88-83b6861e4bb4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hcwm5\" (UID: \"f3c5a7f7-b993-459b-8f88-83b6861e4bb4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937010 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drhtv\" (UniqueName: \"kubernetes.io/projected/30b7b1ea-4fad-47ff-8278-6d1e3f256b51-kube-api-access-drhtv\") pod \"multus-admission-controller-857f4d67dd-mscjf\" (UID: \"30b7b1ea-4fad-47ff-8278-6d1e3f256b51\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937038 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4z4z\" (UniqueName: 
\"kubernetes.io/projected/d35d5638-daa3-4829-bae0-449278a71719-kube-api-access-w4z4z\") pod \"machine-config-controller-84d6567774-k6bfq\" (UID: \"d35d5638-daa3-4829-bae0-449278a71719\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937063 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ee70f092-28be-470d-961b-0c777d465523-stats-auth\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937085 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-cabundle\") pod \"service-ca-9c57cc56f-ssmd6\" (UID: \"a0957754-07ca-49e4-93ee-aabf49ce5578\") " pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937106 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a1a69b-6c27-4b5c-95ed-ea05e85bee50-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cnnxt\" (UID: \"f4a1a69b-6c27-4b5c-95ed-ea05e85bee50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937127 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-config\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937121 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d84287-a513-45bd-ada2-57b6e115a754-config\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937175 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-tls\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937412 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-socket-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937487 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d774ab00-3113-4ad1-8de8-b66fc0b31b15-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: 
\"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937544 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-certificates\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937547 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-key\") pod \"service-ca-9c57cc56f-ssmd6\" (UID: \"a0957754-07ca-49e4-93ee-aabf49ce5578\") " pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937670 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-node-bootstrap-token\") pod \"machine-config-server-6x58j\" (UID: \"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf\") " pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937700 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-images\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937813 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6b82\" (UniqueName: \"kubernetes.io/projected/da394642-26c8-4d31-8a0b-f49a357dbeda-kube-api-access-t6b82\") pod \"dns-default-wdjz2\" (UID: \"da394642-26c8-4d31-8a0b-f49a357dbeda\") " pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937887 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rdkk\" (UniqueName: \"kubernetes.io/projected/e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e-kube-api-access-7rdkk\") pod \"package-server-manager-789f6589d5-npf9j\" (UID: \"e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937943 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsflg\" (UniqueName: \"kubernetes.io/projected/87d90fd3-1a3e-407f-9c35-f6cfd6b01108-kube-api-access-fsflg\") pod \"ingress-canary-7l62s\" (UID: \"87d90fd3-1a3e-407f-9c35-f6cfd6b01108\") " pod="openshift-ingress-canary/ingress-canary-7l62s" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.937985 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-proxy-tls\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.938814 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d84287-a513-45bd-ada2-57b6e115a754-serving-cert\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.938846 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-oauth-serving-cert\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.938873 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.938893 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-trusted-ca\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.938897 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da394642-26c8-4d31-8a0b-f49a357dbeda-config-volume\") pod \"dns-default-wdjz2\" (UID: \"da394642-26c8-4d31-8a0b-f49a357dbeda\") " pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.938994 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d774ab00-3113-4ad1-8de8-b66fc0b31b15-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: \"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.939052 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce7e8a0-f532-4564-9e0b-771e10667429-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bdmzk\" (UID: \"dce7e8a0-f532-4564-9e0b-771e10667429\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.939245 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-trusted-ca-bundle\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:27 crc kubenswrapper[4740]: 
I0130 15:58:27.939274 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4b5b\" (UniqueName: \"kubernetes.io/projected/a0957754-07ca-49e4-93ee-aabf49ce5578-kube-api-access-h4b5b\") pod \"service-ca-9c57cc56f-ssmd6\" (UID: \"a0957754-07ca-49e4-93ee-aabf49ce5578\") " pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.939381 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/47f6c1f8-2688-44c9-928b-38c58a101de0-profile-collector-cert\") pod \"catalog-operator-68c6474976-tgvth\" (UID: \"47f6c1f8-2688-44c9-928b-38c58a101de0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.939452 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cef47ed2-b13f-4f69-ab97-3665967de31d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g9wqg\" (UID: \"cef47ed2-b13f-4f69-ab97-3665967de31d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.939547 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce7e8a0-f532-4564-9e0b-771e10667429-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bdmzk\" (UID: \"dce7e8a0-f532-4564-9e0b-771e10667429\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.940023 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee70f092-28be-470d-961b-0c777d465523-service-ca-bundle\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.940054 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd736e46-6f9b-41ed-9503-1646948ed818-trusted-ca\") pod \"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.940085 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63559dc-58c0-452c-9629-a4f63f4b4463-config\") pod \"service-ca-operator-777779d784-5mrzj\" (UID: \"d63559dc-58c0-452c-9629-a4f63f4b4463\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.940107 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2nd2\" (UniqueName: \"kubernetes.io/projected/75b41ccc-dc45-4c27-8b9e-99cdddb63824-kube-api-access-g2nd2\") pod \"marketplace-operator-79b997595-mjf5j\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.940164 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cef47ed2-b13f-4f69-ab97-3665967de31d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g9wqg\" (UID: \"cef47ed2-b13f-4f69-ab97-3665967de31d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.940272 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96d8j\" (UniqueName: \"kubernetes.io/projected/72d84287-a513-45bd-ada2-57b6e115a754-kube-api-access-96d8j\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.940331 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glnms\" (UniqueName: \"kubernetes.io/projected/efdc514b-cd12-4784-951b-8c0b2878dd02-kube-api-access-glnms\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.940507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/771ae644-7090-4bf7-915c-142ca8c5e982-tmpfs\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.940641 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.940887 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/72d84287-a513-45bd-ada2-57b6e115a754-etcd-service-ca\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.940933 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cd736e46-6f9b-41ed-9503-1646948ed818-metrics-tls\") pod \"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.941185 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-tls\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.941311 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63559dc-58c0-452c-9629-a4f63f4b4463-serving-cert\") pod \"service-ca-operator-777779d784-5mrzj\" (UID: \"d63559dc-58c0-452c-9629-a4f63f4b4463\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.941582 
4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/72d84287-a513-45bd-ada2-57b6e115a754-etcd-client\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.941716 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/72d84287-a513-45bd-ada2-57b6e115a754-etcd-service-ca\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.941801 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-plugins-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.941847 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4a1a69b-6c27-4b5c-95ed-ea05e85bee50-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cnnxt\" (UID: \"f4a1a69b-6c27-4b5c-95ed-ea05e85bee50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.941959 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efdc514b-cd12-4784-951b-8c0b2878dd02-config\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942011 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d14606d1-1dc6-4ec7-a1e4-6eabc01b5548-srv-cert\") pod \"olm-operator-6b444d44fb-5cbjc\" (UID: \"d14606d1-1dc6-4ec7-a1e4-6eabc01b5548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942036 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4be91ca-c1df-458b-b8da-29f713fefe22-config-volume\") pod \"collect-profiles-29496465-r7p7t\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942079 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da394642-26c8-4d31-8a0b-f49a357dbeda-metrics-tls\") pod \"dns-default-wdjz2\" (UID: \"da394642-26c8-4d31-8a0b-f49a357dbeda\") " pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvz8\" (UniqueName: 
\"kubernetes.io/projected/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-kube-api-access-ssvz8\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942269 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b31422a-a539-4d2d-ba7b-0b7cffd27bf2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g6gsm\" (UID: \"2b31422a-a539-4d2d-ba7b-0b7cffd27bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942413 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjf5j\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942521 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d90fd3-1a3e-407f-9c35-f6cfd6b01108-cert\") pod \"ingress-canary-7l62s\" (UID: \"87d90fd3-1a3e-407f-9c35-f6cfd6b01108\") " pod="openshift-ingress-canary/ingress-canary-7l62s" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942568 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee70f092-28be-470d-961b-0c777d465523-default-certificate\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942599 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm5ds\" (UniqueName: \"kubernetes.io/projected/74fcc367-6e97-4c45-83ec-d3257c125bff-kube-api-access-xm5ds\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942635 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-certs\") pod \"machine-config-server-6x58j\" (UID: \"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf\") " pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942658 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbczk\" (UniqueName: \"kubernetes.io/projected/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-kube-api-access-sbczk\") pod \"machine-config-server-6x58j\" (UID: \"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf\") " pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942718 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjgft\" (UniqueName: 
\"kubernetes.io/projected/d774ab00-3113-4ad1-8de8-b66fc0b31b15-kube-api-access-tjgft\") pod \"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: \"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942747 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjzn2\" (UniqueName: \"kubernetes.io/projected/b4be91ca-c1df-458b-b8da-29f713fefe22-kube-api-access-rjzn2\") pod \"collect-profiles-29496465-r7p7t\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942776 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1232da99-b822-4ac9-8def-d246aacd1df6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cb2qj\" (UID: \"1232da99-b822-4ac9-8def-d246aacd1df6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942802 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q462l\" (UniqueName: \"kubernetes.io/projected/0336ee48-8f1e-49ed-a021-a01446330b39-kube-api-access-q462l\") pod \"control-plane-machine-set-operator-78cbb6b69f-lpktp\" (UID: \"0336ee48-8f1e-49ed-a021-a01446330b39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942857 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-csi-data-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942885 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-629th\" (UniqueName: \"kubernetes.io/projected/d63559dc-58c0-452c-9629-a4f63f4b4463-kube-api-access-629th\") pod \"service-ca-operator-777779d784-5mrzj\" (UID: \"d63559dc-58c0-452c-9629-a4f63f4b4463\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.942979 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f6c1f8-2688-44c9-928b-38c58a101de0-srv-cert\") pod \"catalog-operator-68c6474976-tgvth\" (UID: \"47f6c1f8-2688-44c9-928b-38c58a101de0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943059 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/efdc514b-cd12-4784-951b-8c0b2878dd02-machine-approver-tls\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943193 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtrb9\" (UniqueName: \"kubernetes.io/projected/f3c5a7f7-b993-459b-8f88-83b6861e4bb4-kube-api-access-mtrb9\") pod \"openshift-apiserver-operator-796bbdcf4f-hcwm5\" (UID: \"f3c5a7f7-b993-459b-8f88-83b6861e4bb4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943266 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d35d5638-daa3-4829-bae0-449278a71719-proxy-tls\") pod \"machine-config-controller-84d6567774-k6bfq\" (UID: \"d35d5638-daa3-4829-bae0-449278a71719\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943462 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd736e46-6f9b-41ed-9503-1646948ed818-trusted-ca\") pod \"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943475 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-npf9j\" (UID: \"e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943550 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943575 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c789\" (UniqueName: \"kubernetes.io/projected/43f87d70-f314-4419-be49-f97060083a68-kube-api-access-5c789\") pod \"dns-operator-744455d44c-9rptf\" (UID: \"43f87d70-f314-4419-be49-f97060083a68\") " pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943677 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d35d5638-daa3-4829-bae0-449278a71719-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k6bfq\" (UID: \"d35d5638-daa3-4829-bae0-449278a71719\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8vhj\" (UniqueName: \"kubernetes.io/projected/d14606d1-1dc6-4ec7-a1e4-6eabc01b5548-kube-api-access-p8vhj\") pod \"olm-operator-6b444d44fb-5cbjc\" (UID: \"d14606d1-1dc6-4ec7-a1e4-6eabc01b5548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 
15:58:27.943811 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1232da99-b822-4ac9-8def-d246aacd1df6-config\") pod \"kube-controller-manager-operator-78b949d7b-cb2qj\" (UID: \"1232da99-b822-4ac9-8def-d246aacd1df6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943873 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/72d84287-a513-45bd-ada2-57b6e115a754-etcd-ca\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943926 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1232da99-b822-4ac9-8def-d246aacd1df6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cb2qj\" (UID: \"1232da99-b822-4ac9-8def-d246aacd1df6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.943976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-serving-cert\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.944024 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b31422a-a539-4d2d-ba7b-0b7cffd27bf2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g6gsm\" (UID: \"2b31422a-a539-4d2d-ba7b-0b7cffd27bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.944093 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-bound-sa-token\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.944146 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mjf5j\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.944198 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkvhl\" (UniqueName: \"kubernetes.io/projected/47f6c1f8-2688-44c9-928b-38c58a101de0-kube-api-access-bkvhl\") pod \"catalog-operator-68c6474976-tgvth\" (UID: \"47f6c1f8-2688-44c9-928b-38c58a101de0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.944487 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcjbp\" (UniqueName: \"kubernetes.io/projected/cd736e46-6f9b-41ed-9503-1646948ed818-kube-api-access-qcjbp\") pod \"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.944590 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d774ab00-3113-4ad1-8de8-b66fc0b31b15-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: \"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.944637 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9db18118-120c-43d1-a71a-281f8c7a0adf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4mghk\" (UID: \"9db18118-120c-43d1-a71a-281f8c7a0adf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.944673 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjjd6\" (UniqueName: \"kubernetes.io/projected/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-kube-api-access-zjjd6\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.944526 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/72d84287-a513-45bd-ada2-57b6e115a754-etcd-ca\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.945729 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cd736e46-6f9b-41ed-9503-1646948ed818-metrics-tls\") pod \"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.945887 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cef47ed2-b13f-4f69-ab97-3665967de31d-serving-cert\") pod \"openshift-config-operator-7777fb866f-g9wqg\" (UID: \"cef47ed2-b13f-4f69-ab97-3665967de31d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.947625 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d84287-a513-45bd-ada2-57b6e115a754-serving-cert\") pod \"etcd-operator-b45778765-2crr5\" (UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.960312 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 15:58:27 crc kubenswrapper[4740]: I0130 15:58:27.980162 4740 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.006531 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.018875 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.039142 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.046428 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.046554 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.546528228 +0000 UTC m=+157.183590827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.046727 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6b82\" (UniqueName: \"kubernetes.io/projected/da394642-26c8-4d31-8a0b-f49a357dbeda-kube-api-access-t6b82\") pod \"dns-default-wdjz2\" (UID: \"da394642-26c8-4d31-8a0b-f49a357dbeda\") " pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.046763 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-images\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.046785 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rdkk\" (UniqueName: \"kubernetes.io/projected/e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e-kube-api-access-7rdkk\") pod \"package-server-manager-789f6589d5-npf9j\" (UID: \"e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.046811 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsflg\" (UniqueName: \"kubernetes.io/projected/87d90fd3-1a3e-407f-9c35-f6cfd6b01108-kube-api-access-fsflg\") pod \"ingress-canary-7l62s\" (UID: \"87d90fd3-1a3e-407f-9c35-f6cfd6b01108\") " pod="openshift-ingress-canary/ingress-canary-7l62s" Jan 30 
15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.046835 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-proxy-tls\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.046867 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-oauth-serving-cert\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.046891 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.046919 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da394642-26c8-4d31-8a0b-f49a357dbeda-config-volume\") pod \"dns-default-wdjz2\" (UID: \"da394642-26c8-4d31-8a0b-f49a357dbeda\") " pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.046948 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d774ab00-3113-4ad1-8de8-b66fc0b31b15-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: \"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.046974 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce7e8a0-f532-4564-9e0b-771e10667429-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bdmzk\" (UID: \"dce7e8a0-f532-4564-9e0b-771e10667429\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.047011 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-trusted-ca-bundle\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.047761 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4b5b\" (UniqueName: \"kubernetes.io/projected/a0957754-07ca-49e4-93ee-aabf49ce5578-kube-api-access-h4b5b\") pod \"service-ca-9c57cc56f-ssmd6\" (UID: \"a0957754-07ca-49e4-93ee-aabf49ce5578\") " pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.047803 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/47f6c1f8-2688-44c9-928b-38c58a101de0-profile-collector-cert\") pod \"catalog-operator-68c6474976-tgvth\" (UID: \"47f6c1f8-2688-44c9-928b-38c58a101de0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.047855 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce7e8a0-f532-4564-9e0b-771e10667429-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bdmzk\" (UID: \"dce7e8a0-f532-4564-9e0b-771e10667429\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.047886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee70f092-28be-470d-961b-0c777d465523-service-ca-bundle\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.047914 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63559dc-58c0-452c-9629-a4f63f4b4463-config\") pod \"service-ca-operator-777779d784-5mrzj\" (UID: \"d63559dc-58c0-452c-9629-a4f63f4b4463\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.047981 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.047972 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2nd2\" (UniqueName: \"kubernetes.io/projected/75b41ccc-dc45-4c27-8b9e-99cdddb63824-kube-api-access-g2nd2\") pod \"marketplace-operator-79b997595-mjf5j\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048042 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glnms\" (UniqueName: \"kubernetes.io/projected/efdc514b-cd12-4784-951b-8c0b2878dd02-kube-api-access-glnms\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048078 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/771ae644-7090-4bf7-915c-142ca8c5e982-tmpfs\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048108 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63559dc-58c0-452c-9629-a4f63f4b4463-serving-cert\") pod \"service-ca-operator-777779d784-5mrzj\" (UID: 
\"d63559dc-58c0-452c-9629-a4f63f4b4463\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048226 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-plugins-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.047800 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-images\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048282 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4a1a69b-6c27-4b5c-95ed-ea05e85bee50-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cnnxt\" (UID: \"f4a1a69b-6c27-4b5c-95ed-ea05e85bee50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efdc514b-cd12-4784-951b-8c0b2878dd02-config\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048389 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d14606d1-1dc6-4ec7-a1e4-6eabc01b5548-srv-cert\") pod \"olm-operator-6b444d44fb-5cbjc\" (UID: \"d14606d1-1dc6-4ec7-a1e4-6eabc01b5548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048436 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4be91ca-c1df-458b-b8da-29f713fefe22-config-volume\") pod \"collect-profiles-29496465-r7p7t\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048460 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da394642-26c8-4d31-8a0b-f49a357dbeda-metrics-tls\") pod \"dns-default-wdjz2\" (UID: \"da394642-26c8-4d31-8a0b-f49a357dbeda\") " pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048488 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssvz8\" (UniqueName: \"kubernetes.io/projected/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-kube-api-access-ssvz8\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048515 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b31422a-a539-4d2d-ba7b-0b7cffd27bf2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g6gsm\" (UID: \"2b31422a-a539-4d2d-ba7b-0b7cffd27bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048539 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d90fd3-1a3e-407f-9c35-f6cfd6b01108-cert\") pod \"ingress-canary-7l62s\" (UID: \"87d90fd3-1a3e-407f-9c35-f6cfd6b01108\") " pod="openshift-ingress-canary/ingress-canary-7l62s" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048562 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee70f092-28be-470d-961b-0c777d465523-default-certificate\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048579 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjf5j\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048608 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm5ds\" (UniqueName: \"kubernetes.io/projected/74fcc367-6e97-4c45-83ec-d3257c125bff-kube-api-access-xm5ds\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjgft\" (UniqueName: \"kubernetes.io/projected/d774ab00-3113-4ad1-8de8-b66fc0b31b15-kube-api-access-tjgft\") pod \"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: \"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048679 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjzn2\" (UniqueName: \"kubernetes.io/projected/b4be91ca-c1df-458b-b8da-29f713fefe22-kube-api-access-rjzn2\") pod \"collect-profiles-29496465-r7p7t\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048703 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1232da99-b822-4ac9-8def-d246aacd1df6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cb2qj\" (UID: \"1232da99-b822-4ac9-8def-d246aacd1df6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048722 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q462l\" (UniqueName: 
\"kubernetes.io/projected/0336ee48-8f1e-49ed-a021-a01446330b39-kube-api-access-q462l\") pod \"control-plane-machine-set-operator-78cbb6b69f-lpktp\" (UID: \"0336ee48-8f1e-49ed-a021-a01446330b39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048741 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-certs\") pod \"machine-config-server-6x58j\" (UID: \"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf\") " pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048757 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbczk\" (UniqueName: \"kubernetes.io/projected/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-kube-api-access-sbczk\") pod \"machine-config-server-6x58j\" (UID: \"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf\") " pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-csi-data-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048801 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-629th\" (UniqueName: \"kubernetes.io/projected/d63559dc-58c0-452c-9629-a4f63f4b4463-kube-api-access-629th\") pod \"service-ca-operator-777779d784-5mrzj\" (UID: \"d63559dc-58c0-452c-9629-a4f63f4b4463\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048798 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce7e8a0-f532-4564-9e0b-771e10667429-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bdmzk\" (UID: \"dce7e8a0-f532-4564-9e0b-771e10667429\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048829 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtrb9\" (UniqueName: \"kubernetes.io/projected/f3c5a7f7-b993-459b-8f88-83b6861e4bb4-kube-api-access-mtrb9\") pod \"openshift-apiserver-operator-796bbdcf4f-hcwm5\" (UID: \"f3c5a7f7-b993-459b-8f88-83b6861e4bb4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048852 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d35d5638-daa3-4829-bae0-449278a71719-proxy-tls\") pod \"machine-config-controller-84d6567774-k6bfq\" (UID: \"d35d5638-daa3-4829-bae0-449278a71719\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048916 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-plugins-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: 
\"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048935 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f6c1f8-2688-44c9-928b-38c58a101de0-srv-cert\") pod \"catalog-operator-68c6474976-tgvth\" (UID: \"47f6c1f8-2688-44c9-928b-38c58a101de0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048975 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/efdc514b-cd12-4784-951b-8c0b2878dd02-machine-approver-tls\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.049021 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee70f092-28be-470d-961b-0c777d465523-service-ca-bundle\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.049027 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-npf9j\" (UID: \"e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.048244 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-oauth-serving-cert\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.049073 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c789\" (UniqueName: \"kubernetes.io/projected/43f87d70-f314-4419-be49-f97060083a68-kube-api-access-5c789\") pod \"dns-operator-744455d44c-9rptf\" (UID: \"43f87d70-f314-4419-be49-f97060083a68\") " pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.049203 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d35d5638-daa3-4829-bae0-449278a71719-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k6bfq\" (UID: \"d35d5638-daa3-4829-bae0-449278a71719\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.049266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8vhj\" (UniqueName: \"kubernetes.io/projected/d14606d1-1dc6-4ec7-a1e4-6eabc01b5548-kube-api-access-p8vhj\") pod \"olm-operator-6b444d44fb-5cbjc\" (UID: \"d14606d1-1dc6-4ec7-a1e4-6eabc01b5548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:28 crc kubenswrapper[4740]: 
I0130 15:58:28.049888 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-trusted-ca-bundle\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.049927 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efdc514b-cd12-4784-951b-8c0b2878dd02-config\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.050068 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/771ae644-7090-4bf7-915c-142ca8c5e982-tmpfs\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.050082 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-csi-data-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.050643 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d35d5638-daa3-4829-bae0-449278a71719-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k6bfq\" (UID: \"d35d5638-daa3-4829-bae0-449278a71719\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.049296 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1232da99-b822-4ac9-8def-d246aacd1df6-config\") pod \"kube-controller-manager-operator-78b949d7b-cb2qj\" (UID: \"1232da99-b822-4ac9-8def-d246aacd1df6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.050713 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1232da99-b822-4ac9-8def-d246aacd1df6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cb2qj\" (UID: \"1232da99-b822-4ac9-8def-d246aacd1df6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.050742 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-serving-cert\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.050778 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-mjf5j\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.050805 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b31422a-a539-4d2d-ba7b-0b7cffd27bf2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g6gsm\" (UID: \"2b31422a-a539-4d2d-ba7b-0b7cffd27bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.050902 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkvhl\" (UniqueName: \"kubernetes.io/projected/47f6c1f8-2688-44c9-928b-38c58a101de0-kube-api-access-bkvhl\") pod \"catalog-operator-68c6474976-tgvth\" (UID: \"47f6c1f8-2688-44c9-928b-38c58a101de0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.050960 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d774ab00-3113-4ad1-8de8-b66fc0b31b15-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: \"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.050993 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9db18118-120c-43d1-a71a-281f8c7a0adf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4mghk\" (UID: \"9db18118-120c-43d1-a71a-281f8c7a0adf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051020 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjjd6\" (UniqueName: \"kubernetes.io/projected/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-kube-api-access-zjjd6\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051048 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr5tn\" (UniqueName: \"kubernetes.io/projected/dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0-kube-api-access-vr5tn\") pod \"downloads-7954f5f757-vvtsd\" (UID: \"dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0\") " pod="openshift-console/downloads-7954f5f757-vvtsd" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051073 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db18118-120c-43d1-a71a-281f8c7a0adf-config\") pod \"kube-apiserver-operator-766d6c64bb-4mghk\" (UID: \"9db18118-120c-43d1-a71a-281f8c7a0adf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051167 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3c5a7f7-b993-459b-8f88-83b6861e4bb4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hcwm5\" (UID: \"f3c5a7f7-b993-459b-8f88-83b6861e4bb4\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43f87d70-f314-4419-be49-f97060083a68-metrics-tls\") pod \"dns-operator-744455d44c-9rptf\" (UID: \"43f87d70-f314-4419-be49-f97060083a68\") " pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051218 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d14606d1-1dc6-4ec7-a1e4-6eabc01b5548-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5cbjc\" (UID: \"d14606d1-1dc6-4ec7-a1e4-6eabc01b5548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051244 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/30b7b1ea-4fad-47ff-8278-6d1e3f256b51-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mscjf\" (UID: \"30b7b1ea-4fad-47ff-8278-6d1e3f256b51\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051271 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72d7s\" (UniqueName: \"kubernetes.io/projected/26373528-9c79-419c-a68b-8cce50827fd5-kube-api-access-72d7s\") pod \"migrator-59844c95c7-d4zn6\" (UID: \"26373528-9c79-419c-a68b-8cce50827fd5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051295 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/771ae644-7090-4bf7-915c-142ca8c5e982-webhook-cert\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051360 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0336ee48-8f1e-49ed-a021-a01446330b39-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lpktp\" (UID: \"0336ee48-8f1e-49ed-a021-a01446330b39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051334 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce7e8a0-f532-4564-9e0b-771e10667429-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bdmzk\" (UID: \"dce7e8a0-f532-4564-9e0b-771e10667429\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051391 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-registration-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:28 crc kubenswrapper[4740]: 
I0130 15:58:28.051823 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-service-ca\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051857 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-registration-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.052219 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3c5a7f7-b993-459b-8f88-83b6861e4bb4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hcwm5\" (UID: \"f3c5a7f7-b993-459b-8f88-83b6861e4bb4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.052686 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-service-ca\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.051857 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b31422a-a539-4d2d-ba7b-0b7cffd27bf2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g6gsm\" (UID: \"2b31422a-a539-4d2d-ba7b-0b7cffd27bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.052767 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee70f092-28be-470d-961b-0c777d465523-metrics-certs\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.052867 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/efdc514b-cd12-4784-951b-8c0b2878dd02-auth-proxy-config\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.053081 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.052976 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d774ab00-3113-4ad1-8de8-b66fc0b31b15-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: \"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.053113 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/771ae644-7090-4bf7-915c-142ca8c5e982-apiservice-cert\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.053042 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee70f092-28be-470d-961b-0c777d465523-default-certificate\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054072 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/efdc514b-cd12-4784-951b-8c0b2878dd02-auth-proxy-config\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054383 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d774ab00-3113-4ad1-8de8-b66fc0b31b15-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: \"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054464 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d7wp\" (UniqueName: \"kubernetes.io/projected/771ae644-7090-4bf7-915c-142ca8c5e982-kube-api-access-2d7wp\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054494 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4sft\" (UniqueName: \"kubernetes.io/projected/dce7e8a0-f532-4564-9e0b-771e10667429-kube-api-access-d4sft\") pod \"openshift-controller-manager-operator-756b6f6bc6-bdmzk\" (UID: \"dce7e8a0-f532-4564-9e0b-771e10667429\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054582 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4be91ca-c1df-458b-b8da-29f713fefe22-secret-volume\") pod \"collect-profiles-29496465-r7p7t\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054617 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-oauth-config\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " 
pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054663 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-mountpoint-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054719 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9db18118-120c-43d1-a71a-281f8c7a0adf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4mghk\" (UID: \"9db18118-120c-43d1-a71a-281f8c7a0adf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054755 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqdb\" (UniqueName: \"kubernetes.io/projected/f4a1a69b-6c27-4b5c-95ed-ea05e85bee50-kube-api-access-mjqdb\") pod \"kube-storage-version-migrator-operator-b67b599dd-cnnxt\" (UID: \"f4a1a69b-6c27-4b5c-95ed-ea05e85bee50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-724pn\" (UniqueName: \"kubernetes.io/projected/ee70f092-28be-470d-961b-0c777d465523-kube-api-access-724pn\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054816 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3c5a7f7-b993-459b-8f88-83b6861e4bb4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hcwm5\" (UID: \"f3c5a7f7-b993-459b-8f88-83b6861e4bb4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054848 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drhtv\" (UniqueName: \"kubernetes.io/projected/30b7b1ea-4fad-47ff-8278-6d1e3f256b51-kube-api-access-drhtv\") pod \"multus-admission-controller-857f4d67dd-mscjf\" (UID: \"30b7b1ea-4fad-47ff-8278-6d1e3f256b51\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054878 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4z4z\" (UniqueName: \"kubernetes.io/projected/d35d5638-daa3-4829-bae0-449278a71719-kube-api-access-w4z4z\") pod \"machine-config-controller-84d6567774-k6bfq\" (UID: \"d35d5638-daa3-4829-bae0-449278a71719\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054907 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ee70f092-28be-470d-961b-0c777d465523-stats-auth\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:28 crc kubenswrapper[4740]: 
I0130 15:58:28.054937 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-socket-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054965 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d774ab00-3113-4ad1-8de8-b66fc0b31b15-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: \"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.054996 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-cabundle\") pod \"service-ca-9c57cc56f-ssmd6\" (UID: \"a0957754-07ca-49e4-93ee-aabf49ce5578\") " pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.055029 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a1a69b-6c27-4b5c-95ed-ea05e85bee50-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cnnxt\" (UID: \"f4a1a69b-6c27-4b5c-95ed-ea05e85bee50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.055054 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-config\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.055085 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-key\") pod \"service-ca-9c57cc56f-ssmd6\" (UID: \"a0957754-07ca-49e4-93ee-aabf49ce5578\") " pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.055115 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-node-bootstrap-token\") pod \"machine-config-server-6x58j\" (UID: \"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf\") " pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.056529 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.556481315 +0000 UTC m=+157.193543954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.056587 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-mountpoint-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.056929 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/74fcc367-6e97-4c45-83ec-d3257c125bff-socket-dir\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.057441 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-config\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.060603 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.068510 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43f87d70-f314-4419-be49-f97060083a68-metrics-tls\") pod \"dns-operator-744455d44c-9rptf\" (UID: \"43f87d70-f314-4419-be49-f97060083a68\") " pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.068651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-serving-cert\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.070097 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-oauth-config\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.070103 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/efdc514b-cd12-4784-951b-8c0b2878dd02-machine-approver-tls\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.070206 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/ee70f092-28be-470d-961b-0c777d465523-stats-auth\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.072461 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee70f092-28be-470d-961b-0c777d465523-metrics-certs\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.072849 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-proxy-tls\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.073307 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3c5a7f7-b993-459b-8f88-83b6861e4bb4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hcwm5\" (UID: \"f3c5a7f7-b993-459b-8f88-83b6861e4bb4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.080237 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.093210 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d35d5638-daa3-4829-bae0-449278a71719-proxy-tls\") pod \"machine-config-controller-84d6567774-k6bfq\" (UID: \"d35d5638-daa3-4829-bae0-449278a71719\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.098740 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.119078 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.139436 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.148017 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/30b7b1ea-4fad-47ff-8278-6d1e3f256b51-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mscjf\" (UID: \"30b7b1ea-4fad-47ff-8278-6d1e3f256b51\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.156628 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.156817 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.656786252 +0000 UTC m=+157.293848881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.157690 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.158530 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.658486874 +0000 UTC m=+157.295549473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.158693 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.166303 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-npf9j\" (UID: \"e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.180788 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.199279 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.219837 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.239670 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.258954 4740 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.259175 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.759144779 +0000 UTC m=+157.396207408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.260107 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.260152 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.261183 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.761137439 +0000 UTC m=+157.398200218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.280267 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.286953 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1232da99-b822-4ac9-8def-d246aacd1df6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cb2qj\" (UID: \"1232da99-b822-4ac9-8def-d246aacd1df6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.298987 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.301983 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1232da99-b822-4ac9-8def-d246aacd1df6-config\") pod \"kube-controller-manager-operator-78b949d7b-cb2qj\" (UID: \"1232da99-b822-4ac9-8def-d246aacd1df6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.320631 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.335154 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4a1a69b-6c27-4b5c-95ed-ea05e85bee50-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cnnxt\" (UID: \"f4a1a69b-6c27-4b5c-95ed-ea05e85bee50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.339811 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.349110 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4a1a69b-6c27-4b5c-95ed-ea05e85bee50-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cnnxt\" (UID: \"f4a1a69b-6c27-4b5c-95ed-ea05e85bee50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.360082 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.362018 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.362423 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.862338217 +0000 UTC m=+157.499400856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.363047 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.363548 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.863531777 +0000 UTC m=+157.500594416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.380053 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.399404 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.419684 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.435755 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/47f6c1f8-2688-44c9-928b-38c58a101de0-srv-cert\") pod \"catalog-operator-68c6474976-tgvth\" (UID: \"47f6c1f8-2688-44c9-928b-38c58a101de0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.439573 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.447951 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d14606d1-1dc6-4ec7-a1e4-6eabc01b5548-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5cbjc\" (UID: \"d14606d1-1dc6-4ec7-a1e4-6eabc01b5548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.451072 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4be91ca-c1df-458b-b8da-29f713fefe22-secret-volume\") pod \"collect-profiles-29496465-r7p7t\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.453196 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/47f6c1f8-2688-44c9-928b-38c58a101de0-profile-collector-cert\") pod \"catalog-operator-68c6474976-tgvth\" (UID: \"47f6c1f8-2688-44c9-928b-38c58a101de0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.459339 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.461085 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4be91ca-c1df-458b-b8da-29f713fefe22-config-volume\") pod \"collect-profiles-29496465-r7p7t\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.464097 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.464323 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.964279984 +0000 UTC m=+157.601342623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.464572 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.465211 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:28.965167166 +0000 UTC m=+157.602229775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.479572 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.500436 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.506617 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/771ae644-7090-4bf7-915c-142ca8c5e982-webhook-cert\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.510580 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/771ae644-7090-4bf7-915c-142ca8c5e982-apiservice-cert\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.537154 4740 request.go:700] Waited for 1.007561125s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.539292 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll6xb\" (UniqueName: \"kubernetes.io/projected/1430672f-603b-4f60-bb2a-e95cd48a56c2-kube-api-access-ll6xb\") pod \"route-controller-manager-6576b87f9c-s6nlr\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.539574 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.560723 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.566183 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.567134 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 15:58:29.067116744 +0000 UTC m=+157.704179353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.583374 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.598367 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.604905 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjf5j\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.625926 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.633406 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mjf5j\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.639724 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.659525 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.668523 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.669061 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.16903847 +0000 UTC m=+157.806101089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.680277 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.683466 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skw6l\" (UniqueName: \"kubernetes.io/projected/fef3a2ff-8e8b-4e93-80b7-bd7b0249e223-kube-api-access-skw6l\") pod \"apiserver-76f77b778f-xzvss\" (UID: \"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223\") " pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.700086 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.706321 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9db18118-120c-43d1-a71a-281f8c7a0adf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4mghk\" (UID: \"9db18118-120c-43d1-a71a-281f8c7a0adf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.746579 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6glvx\" (UniqueName: \"kubernetes.io/projected/be7f0e88-7c2e-4c1b-a617-9da27584b057-kube-api-access-6glvx\") pod \"machine-api-operator-5694c8668f-dl6xs\" (UID: \"be7f0e88-7c2e-4c1b-a617-9da27584b057\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.752946 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqkcl\" (UniqueName: \"kubernetes.io/projected/43c42cd5-18b5-4430-87c3-67ba872bb44f-kube-api-access-hqkcl\") pod \"console-operator-58897d9998-sg76q\" (UID: \"43c42cd5-18b5-4430-87c3-67ba872bb44f\") " pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.770036 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.770313 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.27026665 +0000 UTC m=+157.907329259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.770497 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.771468 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.271434329 +0000 UTC m=+157.908496958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.787195 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m7m8\" (UniqueName: \"kubernetes.io/projected/c4f06d56-e3ce-413c-bbaf-f479d0629867-kube-api-access-2m7m8\") pod \"apiserver-7bbb656c7d-vc2l4\" (UID: \"c4f06d56-e3ce-413c-bbaf-f479d0629867\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.797262 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r7dn\" (UniqueName: \"kubernetes.io/projected/64c5a0e0-9121-416a-b48c-219349cc9ba3-kube-api-access-7r7dn\") pod \"oauth-openshift-558db77b4-rf9jx\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.816317 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.818884 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpbz6\" (UniqueName: \"kubernetes.io/projected/15af21ec-1c9c-46bc-b2be-8efa7628acf8-kube-api-access-hpbz6\") pod \"authentication-operator-69f744f599-j2cnv\" (UID: \"15af21ec-1c9c-46bc-b2be-8efa7628acf8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.819818 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.827548 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db18118-120c-43d1-a71a-281f8c7a0adf-config\") pod \"kube-apiserver-operator-766d6c64bb-4mghk\" (UID: \"9db18118-120c-43d1-a71a-281f8c7a0adf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.840425 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.856605 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d14606d1-1dc6-4ec7-a1e4-6eabc01b5548-srv-cert\") pod \"olm-operator-6b444d44fb-5cbjc\" (UID: \"d14606d1-1dc6-4ec7-a1e4-6eabc01b5548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.871550 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.872711 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.372686048 +0000 UTC m=+158.009748687 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.873277 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.880010 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdzz9\" (UniqueName: \"kubernetes.io/projected/69ccfcc2-8b4e-489d-8674-e41092950276-kube-api-access-bdzz9\") pod \"cluster-samples-operator-665b6dd947-97t7f\" (UID: \"69ccfcc2-8b4e-489d-8674-e41092950276\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.894281 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpvvf\" (UniqueName: \"kubernetes.io/projected/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-kube-api-access-cpvvf\") pod \"controller-manager-879f6c89f-fjvms\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.900039 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.903920 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.920374 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.925669 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b31422a-a539-4d2d-ba7b-0b7cffd27bf2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g6gsm\" (UID: \"2b31422a-a539-4d2d-ba7b-0b7cffd27bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.931389 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.938728 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.942873 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b31422a-a539-4d2d-ba7b-0b7cffd27bf2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g6gsm\" (UID: \"2b31422a-a539-4d2d-ba7b-0b7cffd27bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.959230 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.974394 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:28 crc kubenswrapper[4740]: E0130 15:58:28.974871 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.474845381 +0000 UTC m=+158.111907990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.979219 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.980202 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d63559dc-58c0-452c-9629-a4f63f4b4463-config\") pod \"service-ca-operator-777779d784-5mrzj\" (UID: \"d63559dc-58c0-452c-9629-a4f63f4b4463\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.982584 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr"] Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.983624 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.991491 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:28 crc kubenswrapper[4740]: I0130 15:58:28.999230 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.008093 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xzvss"] Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.018950 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.031958 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d63559dc-58c0-452c-9629-a4f63f4b4463-serving-cert\") pod \"service-ca-operator-777779d784-5mrzj\" (UID: \"d63559dc-58c0-452c-9629-a4f63f4b4463\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.038542 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.048255 4740 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.048438 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/da394642-26c8-4d31-8a0b-f49a357dbeda-config-volume podName:da394642-26c8-4d31-8a0b-f49a357dbeda nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.548399881 +0000 UTC m=+158.185462490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/da394642-26c8-4d31-8a0b-f49a357dbeda-config-volume") pod "dns-default-wdjz2" (UID: "da394642-26c8-4d31-8a0b-f49a357dbeda") : failed to sync configmap cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.049749 4740 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.049880 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87d90fd3-1a3e-407f-9c35-f6cfd6b01108-cert podName:87d90fd3-1a3e-407f-9c35-f6cfd6b01108 nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.549846407 +0000 UTC m=+158.186909006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87d90fd3-1a3e-407f-9c35-f6cfd6b01108-cert") pod "ingress-canary-7l62s" (UID: "87d90fd3-1a3e-407f-9c35-f6cfd6b01108") : failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.049936 4740 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.050035 4740 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.050079 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da394642-26c8-4d31-8a0b-f49a357dbeda-metrics-tls podName:da394642-26c8-4d31-8a0b-f49a357dbeda nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.550043822 +0000 UTC m=+158.187106431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/da394642-26c8-4d31-8a0b-f49a357dbeda-metrics-tls") pod "dns-default-wdjz2" (UID: "da394642-26c8-4d31-8a0b-f49a357dbeda") : failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.050111 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-certs podName:3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.550095974 +0000 UTC m=+158.187158583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-certs") pod "machine-config-server-6x58j" (UID: "3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf") : failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.052846 4740 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.053007 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0336ee48-8f1e-49ed-a021-a01446330b39-control-plane-machine-set-operator-tls podName:0336ee48-8f1e-49ed-a021-a01446330b39 nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.552979435 +0000 UTC m=+158.190042044 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/0336ee48-8f1e-49ed-a021-a01446330b39-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-lpktp" (UID: "0336ee48-8f1e-49ed-a021-a01446330b39") : failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.055444 4740 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.055553 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-node-bootstrap-token podName:3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf nodeName:}" failed. 
No retries permitted until 2026-01-30 15:58:29.555529349 +0000 UTC m=+158.192591958 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-node-bootstrap-token") pod "machine-config-server-6x58j" (UID: "3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf") : failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.056651 4740 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.056717 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-cabundle podName:a0957754-07ca-49e4-93ee-aabf49ce5578 nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.556702588 +0000 UTC m=+158.193765197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-cabundle") pod "service-ca-9c57cc56f-ssmd6" (UID: "a0957754-07ca-49e4-93ee-aabf49ce5578") : failed to sync configmap cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.057259 4740 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.057323 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-key podName:a0957754-07ca-49e4-93ee-aabf49ce5578 nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.557309123 +0000 UTC m=+158.194371732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-key") pod "service-ca-9c57cc56f-ssmd6" (UID: "a0957754-07ca-49e4-93ee-aabf49ce5578") : failed to sync secret cache: timed out waiting for the condition Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.058091 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.076722 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.076888 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.57685824 +0000 UTC m=+158.213920839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.077201 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.077729 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.577718461 +0000 UTC m=+158.214781060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.078051 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.098328 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.099480 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.119511 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.140608 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.147532 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.161684 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.179403 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.180669 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.680646333 +0000 UTC m=+158.317708942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.180961 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.204026 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.219091 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.240141 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.259207 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.294511 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.295103 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.795087831 +0000 UTC m=+158.432150430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.295840 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.307965 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.326696 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.343812 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.364284 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.380115 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.395404 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.395609 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.895567942 +0000 UTC m=+158.532630541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.395896 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.396497 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 15:58:29.896480014 +0000 UTC m=+158.533542613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.398878 4740 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.419018 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.439277 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.440394 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xzvss" event={"ID":"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223","Type":"ContainerStarted","Data":"2fc3596a686455b0bb4cfa1b29eef88f18ba5a6029b3199203b3b8c03bb6cbf4"} Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.444633 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" event={"ID":"1430672f-603b-4f60-bb2a-e95cd48a56c2","Type":"ContainerStarted","Data":"27d9624993a8b61f97e83d7bda8bd817ffb7a5baf6ad146ff679531541272f76"} Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.444667 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" event={"ID":"1430672f-603b-4f60-bb2a-e95cd48a56c2","Type":"ContainerStarted","Data":"9c45326abd891c8b0f2db045df144b90a78edf013f0941c5322d96be684b335e"} Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.445695 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.447489 4740 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-s6nlr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.447605 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" podUID="1430672f-603b-4f60-bb2a-e95cd48a56c2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.458930 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.479084 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 15:58:29 crc 
kubenswrapper[4740]: I0130 15:58:29.498113 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.498330 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.998289928 +0000 UTC m=+158.635352527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.498551 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.498944 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:29.998929464 +0000 UTC m=+158.635992063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.499090 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.506903 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rf9jx"] Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.519509 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.537799 4740 request.go:700] Waited for 1.675844917s due to client-side throttling, not priority and fairness, request: PATCH:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-rf9jx/status Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.575126 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f"] Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.582503 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgdq4\" (UniqueName: \"kubernetes.io/projected/cef47ed2-b13f-4f69-ab97-3665967de31d-kube-api-access-fgdq4\") pod \"openshift-config-operator-7777fb866f-g9wqg\" (UID: \"cef47ed2-b13f-4f69-ab97-3665967de31d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.594771 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8t6x\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-kube-api-access-d8t6x\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.601598 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.601796 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:30.101762253 +0000 UTC m=+158.738824852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.602028 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0336ee48-8f1e-49ed-a021-a01446330b39-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lpktp\" (UID: \"0336ee48-8f1e-49ed-a021-a01446330b39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.602095 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.602203 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-cabundle\") pod \"service-ca-9c57cc56f-ssmd6\" (UID: \"a0957754-07ca-49e4-93ee-aabf49ce5578\") " pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.602238 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-node-bootstrap-token\") pod \"machine-config-server-6x58j\" (UID: \"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf\") " pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.602257 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-key\") pod \"service-ca-9c57cc56f-ssmd6\" (UID: \"a0957754-07ca-49e4-93ee-aabf49ce5578\") " pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.602317 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da394642-26c8-4d31-8a0b-f49a357dbeda-config-volume\") pod \"dns-default-wdjz2\" (UID: \"da394642-26c8-4d31-8a0b-f49a357dbeda\") " pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.602451 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da394642-26c8-4d31-8a0b-f49a357dbeda-metrics-tls\") pod \"dns-default-wdjz2\" (UID: \"da394642-26c8-4d31-8a0b-f49a357dbeda\") " pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.602488 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/87d90fd3-1a3e-407f-9c35-f6cfd6b01108-cert\") pod \"ingress-canary-7l62s\" (UID: \"87d90fd3-1a3e-407f-9c35-f6cfd6b01108\") " pod="openshift-ingress-canary/ingress-canary-7l62s" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.602535 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-certs\") pod \"machine-config-server-6x58j\" (UID: \"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf\") " pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.605812 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-certs\") pod \"machine-config-server-6x58j\" (UID: \"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf\") " pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.611916 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0336ee48-8f1e-49ed-a021-a01446330b39-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-lpktp\" (UID: \"0336ee48-8f1e-49ed-a021-a01446330b39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.614422 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:30.114391658 +0000 UTC m=+158.751454257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.615464 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d90fd3-1a3e-407f-9c35-f6cfd6b01108-cert\") pod \"ingress-canary-7l62s\" (UID: \"87d90fd3-1a3e-407f-9c35-f6cfd6b01108\") " pod="openshift-ingress-canary/ingress-canary-7l62s" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.615984 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da394642-26c8-4d31-8a0b-f49a357dbeda-config-volume\") pod \"dns-default-wdjz2\" (UID: \"da394642-26c8-4d31-8a0b-f49a357dbeda\") " pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.616245 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd736e46-6f9b-41ed-9503-1646948ed818-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.616740 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4"] Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.617496 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da394642-26c8-4d31-8a0b-f49a357dbeda-metrics-tls\") pod \"dns-default-wdjz2\" (UID: \"da394642-26c8-4d31-8a0b-f49a357dbeda\") " pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.622314 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-node-bootstrap-token\") pod \"machine-config-server-6x58j\" (UID: \"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf\") " pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.628839 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sg76q"] Jan 30 15:58:29 crc kubenswrapper[4740]: W0130 15:58:29.630276 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f06d56_e3ce_413c_bbaf_f479d0629867.slice/crio-4c43c77f39317f3cc9a021304196b2816b0303cd41f0b512cf6e56c3c03a14b2 WatchSource:0}: Error finding container 4c43c77f39317f3cc9a021304196b2816b0303cd41f0b512cf6e56c3c03a14b2: Status 404 returned error can't find the container with id 4c43c77f39317f3cc9a021304196b2816b0303cd41f0b512cf6e56c3c03a14b2 Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.632296 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96d8j\" (UniqueName: \"kubernetes.io/projected/72d84287-a513-45bd-ada2-57b6e115a754-kube-api-access-96d8j\") pod \"etcd-operator-b45778765-2crr5\" 
(UID: \"72d84287-a513-45bd-ada2-57b6e115a754\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.634092 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" Jan 30 15:58:29 crc kubenswrapper[4740]: W0130 15:58:29.644558 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43c42cd5_18b5_4430_87c3_67ba872bb44f.slice/crio-381e8ccafc0498bce06193b2a7a30c3918116ed31f9ff7884dad2d3e7e6be83f WatchSource:0}: Error finding container 381e8ccafc0498bce06193b2a7a30c3918116ed31f9ff7884dad2d3e7e6be83f: Status 404 returned error can't find the container with id 381e8ccafc0498bce06193b2a7a30c3918116ed31f9ff7884dad2d3e7e6be83f Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.650672 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-cabundle\") pod \"service-ca-9c57cc56f-ssmd6\" (UID: \"a0957754-07ca-49e4-93ee-aabf49ce5578\") " pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.650889 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a0957754-07ca-49e4-93ee-aabf49ce5578-signing-key\") pod \"service-ca-9c57cc56f-ssmd6\" (UID: \"a0957754-07ca-49e4-93ee-aabf49ce5578\") " pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.655254 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-bound-sa-token\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.675735 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcjbp\" (UniqueName: \"kubernetes.io/projected/cd736e46-6f9b-41ed-9503-1646948ed818-kube-api-access-qcjbp\") pod \"ingress-operator-5b745b69d9-mqdkd\" (UID: \"cd736e46-6f9b-41ed-9503-1646948ed818\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.695147 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6b82\" (UniqueName: \"kubernetes.io/projected/da394642-26c8-4d31-8a0b-f49a357dbeda-kube-api-access-t6b82\") pod \"dns-default-wdjz2\" (UID: \"da394642-26c8-4d31-8a0b-f49a357dbeda\") " pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.703956 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.704718 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 15:58:30.204644724 +0000 UTC m=+158.841707373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.705102 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.705715 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:30.205683869 +0000 UTC m=+158.842746669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.707114 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.723136 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rdkk\" (UniqueName: \"kubernetes.io/projected/e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e-kube-api-access-7rdkk\") pod \"package-server-manager-789f6589d5-npf9j\" (UID: \"e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.734704 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsflg\" (UniqueName: \"kubernetes.io/projected/87d90fd3-1a3e-407f-9c35-f6cfd6b01108-kube-api-access-fsflg\") pod \"ingress-canary-7l62s\" (UID: \"87d90fd3-1a3e-407f-9c35-f6cfd6b01108\") " pod="openshift-ingress-canary/ingress-canary-7l62s" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.787181 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2nd2\" (UniqueName: \"kubernetes.io/projected/75b41ccc-dc45-4c27-8b9e-99cdddb63824-kube-api-access-g2nd2\") pod \"marketplace-operator-79b997595-mjf5j\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.792958 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dl6xs"] Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.798001 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4b5b\" (UniqueName: \"kubernetes.io/projected/a0957754-07ca-49e4-93ee-aabf49ce5578-kube-api-access-h4b5b\") pod \"service-ca-9c57cc56f-ssmd6\" (UID: \"a0957754-07ca-49e4-93ee-aabf49ce5578\") " pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.801798 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glnms\" (UniqueName: \"kubernetes.io/projected/efdc514b-cd12-4784-951b-8c0b2878dd02-kube-api-access-glnms\") pod \"machine-approver-56656f9798-sp4zn\" (UID: \"efdc514b-cd12-4784-951b-8c0b2878dd02\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.805040 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-j2cnv"] Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.805098 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fjvms"] Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.806404 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.806559 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 15:58:30.306527198 +0000 UTC m=+158.943589797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: W0130 15:58:29.810537 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe7f0e88_7c2e_4c1b_a617_9da27584b057.slice/crio-409edc0db5af93f516852533a03a998e531a40c3a714a1fbdb1bb7ba29bce732 WatchSource:0}: Error finding container 409edc0db5af93f516852533a03a998e531a40c3a714a1fbdb1bb7ba29bce732: Status 404 returned error can't find the container with id 409edc0db5af93f516852533a03a998e531a40c3a714a1fbdb1bb7ba29bce732 Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.811328 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.812535 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:30.312516887 +0000 UTC m=+158.949579486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.815790 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c789\" (UniqueName: \"kubernetes.io/projected/43f87d70-f314-4419-be49-f97060083a68-kube-api-access-5c789\") pod \"dns-operator-744455d44c-9rptf\" (UID: \"43f87d70-f314-4419-be49-f97060083a68\") " pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" Jan 30 15:58:29 crc kubenswrapper[4740]: W0130 15:58:29.818000 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78eb2bf3_1af4_4efd_8ce0_733dded1dcaf.slice/crio-07f8e2d8393a9a77917e2604ab7da96b8f13c64c1a3822e398f3e3cf8a237187 WatchSource:0}: Error finding container 07f8e2d8393a9a77917e2604ab7da96b8f13c64c1a3822e398f3e3cf8a237187: Status 404 returned error can't find the container with id 07f8e2d8393a9a77917e2604ab7da96b8f13c64c1a3822e398f3e3cf8a237187 Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.822915 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.834670 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm5ds\" (UniqueName: \"kubernetes.io/projected/74fcc367-6e97-4c45-83ec-d3257c125bff-kube-api-access-xm5ds\") pod \"csi-hostpathplugin-rqtgp\" (UID: \"74fcc367-6e97-4c45-83ec-d3257c125bff\") " pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.860317 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssvz8\" (UniqueName: \"kubernetes.io/projected/ac1f6214-eb5a-4aef-81e2-4a513de6fef3-kube-api-access-ssvz8\") pod \"machine-config-operator-74547568cd-q42cf\" (UID: \"ac1f6214-eb5a-4aef-81e2-4a513de6fef3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.876747 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.891091 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.896817 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2crr5"] Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.898368 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjgft\" (UniqueName: \"kubernetes.io/projected/d774ab00-3113-4ad1-8de8-b66fc0b31b15-kube-api-access-tjgft\") pod \"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: \"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.912175 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:29 crc kubenswrapper[4740]: E0130 15:58:29.912635 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:30.412592258 +0000 UTC m=+159.049654857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.919703 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-629th\" (UniqueName: \"kubernetes.io/projected/d63559dc-58c0-452c-9629-a4f63f4b4463-kube-api-access-629th\") pod \"service-ca-operator-777779d784-5mrzj\" (UID: \"d63559dc-58c0-452c-9629-a4f63f4b4463\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.920193 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.928548 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.932758 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjzn2\" (UniqueName: \"kubernetes.io/projected/b4be91ca-c1df-458b-b8da-29f713fefe22-kube-api-access-rjzn2\") pod \"collect-profiles-29496465-r7p7t\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.954580 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1232da99-b822-4ac9-8def-d246aacd1df6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cb2qj\" (UID: \"1232da99-b822-4ac9-8def-d246aacd1df6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.962111 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.972067 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg"] Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.985835 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.988735 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7l62s" Jan 30 15:58:29 crc kubenswrapper[4740]: I0130 15:58:29.993196 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbczk\" (UniqueName: \"kubernetes.io/projected/3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf-kube-api-access-sbczk\") pod \"machine-config-server-6x58j\" (UID: \"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf\") " pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.000829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtrb9\" (UniqueName: \"kubernetes.io/projected/f3c5a7f7-b993-459b-8f88-83b6861e4bb4-kube-api-access-mtrb9\") pod \"openshift-apiserver-operator-796bbdcf4f-hcwm5\" (UID: \"f3c5a7f7-b993-459b-8f88-83b6861e4bb4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.008766 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b31422a-a539-4d2d-ba7b-0b7cffd27bf2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g6gsm\" (UID: \"2b31422a-a539-4d2d-ba7b-0b7cffd27bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.013473 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.013884 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:30.513869518 +0000 UTC m=+159.150932107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.020469 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q462l\" (UniqueName: \"kubernetes.io/projected/0336ee48-8f1e-49ed-a021-a01446330b39-kube-api-access-q462l\") pod \"control-plane-machine-set-operator-78cbb6b69f-lpktp\" (UID: \"0336ee48-8f1e-49ed-a021-a01446330b39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.047085 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.047494 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8vhj\" (UniqueName: \"kubernetes.io/projected/d14606d1-1dc6-4ec7-a1e4-6eabc01b5548-kube-api-access-p8vhj\") pod \"olm-operator-6b444d44fb-5cbjc\" (UID: \"d14606d1-1dc6-4ec7-a1e4-6eabc01b5548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.055617 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.061951 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j"] Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.062561 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.066000 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr5tn\" (UniqueName: \"kubernetes.io/projected/dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0-kube-api-access-vr5tn\") pod \"downloads-7954f5f757-vvtsd\" (UID: \"dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0\") " pod="openshift-console/downloads-7954f5f757-vvtsd" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.075251 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjjd6\" (UniqueName: \"kubernetes.io/projected/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-kube-api-access-zjjd6\") pod \"console-f9d7485db-5q9nt\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.085831 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.093294 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72d7s\" (UniqueName: \"kubernetes.io/projected/26373528-9c79-419c-a68b-8cce50827fd5-kube-api-access-72d7s\") pod \"migrator-59844c95c7-d4zn6\" (UID: \"26373528-9c79-419c-a68b-8cce50827fd5\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.114889 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.119039 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkvhl\" (UniqueName: \"kubernetes.io/projected/47f6c1f8-2688-44c9-928b-38c58a101de0-kube-api-access-bkvhl\") pod \"catalog-operator-68c6474976-tgvth\" (UID: \"47f6c1f8-2688-44c9-928b-38c58a101de0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.119992 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:30.619946818 +0000 UTC m=+159.257009587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.135994 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.141523 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d7wp\" (UniqueName: \"kubernetes.io/projected/771ae644-7090-4bf7-915c-142ca8c5e982-kube-api-access-2d7wp\") pod \"packageserver-d55dfcdfc-7dgp6\" (UID: \"771ae644-7090-4bf7-915c-142ca8c5e982\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.161673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4sft\" (UniqueName: \"kubernetes.io/projected/dce7e8a0-f532-4564-9e0b-771e10667429-kube-api-access-d4sft\") pod \"openshift-controller-manager-operator-756b6f6bc6-bdmzk\" (UID: \"dce7e8a0-f532-4564-9e0b-771e10667429\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.162157 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.175881 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.181894 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4z4z\" (UniqueName: \"kubernetes.io/projected/d35d5638-daa3-4829-bae0-449278a71719-kube-api-access-w4z4z\") pod \"machine-config-controller-84d6567774-k6bfq\" (UID: \"d35d5638-daa3-4829-bae0-449278a71719\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.189413 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.196806 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-724pn\" (UniqueName: \"kubernetes.io/projected/ee70f092-28be-470d-961b-0c777d465523-kube-api-access-724pn\") pod \"router-default-5444994796-pvwn4\" (UID: \"ee70f092-28be-470d-961b-0c777d465523\") " pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.207271 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.217083 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.217713 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:30.717692841 +0000 UTC m=+159.354755440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.219230 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.224775 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjqdb\" (UniqueName: \"kubernetes.io/projected/f4a1a69b-6c27-4b5c-95ed-ea05e85bee50-kube-api-access-mjqdb\") pod \"kube-storage-version-migrator-operator-b67b599dd-cnnxt\" (UID: \"f4a1a69b-6c27-4b5c-95ed-ea05e85bee50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.242367 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.246396 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.252074 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9db18118-120c-43d1-a71a-281f8c7a0adf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4mghk\" (UID: \"9db18118-120c-43d1-a71a-281f8c7a0adf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.254937 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6x58j" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.276340 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d774ab00-3113-4ad1-8de8-b66fc0b31b15-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-z48p4\" (UID: \"d774ab00-3113-4ad1-8de8-b66fc0b31b15\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.277100 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drhtv\" (UniqueName: \"kubernetes.io/projected/30b7b1ea-4fad-47ff-8278-6d1e3f256b51-kube-api-access-drhtv\") pod \"multus-admission-controller-857f4d67dd-mscjf\" (UID: \"30b7b1ea-4fad-47ff-8278-6d1e3f256b51\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.314226 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-vvtsd" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.318017 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.318483 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:30.818453539 +0000 UTC m=+159.455516128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.324978 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.330165 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.337480 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.377061 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.379367 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wdjz2"] Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.398047 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.411932 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.420145 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.420697 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:30.920677523 +0000 UTC m=+159.557740122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.449629 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.475411 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7l62s"] Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.495782 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" event={"ID":"cef47ed2-b13f-4f69-ab97-3665967de31d","Type":"ContainerStarted","Data":"b3c7ceb59b4dd2999a1ec63790f63efde7e7831e0748c39abb01430b010b2a8d"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.497795 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd"] Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.498959 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.504379 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" event={"ID":"72d84287-a513-45bd-ada2-57b6e115a754","Type":"ContainerStarted","Data":"da5a62a5f32be838728555acf247fa9aa633ccb1183a39c86520ccbfa5640409"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.515231 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" event={"ID":"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf","Type":"ContainerStarted","Data":"3376d873a7cd5e6892cc30d9929763db9ef0a7c2c511a465a043315c74c60e58"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.515277 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" event={"ID":"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf","Type":"ContainerStarted","Data":"07f8e2d8393a9a77917e2604ab7da96b8f13c64c1a3822e398f3e3cf8a237187"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.516606 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.522722 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.522884 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.022838755 +0000 UTC m=+159.659901354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.523039 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.523437 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.02342097 +0000 UTC m=+159.660483559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.530310 4740 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fjvms container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.530398 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" podUID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.531238 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" event={"ID":"be7f0e88-7c2e-4c1b-a617-9da27584b057","Type":"ContainerStarted","Data":"19b483ffd84abc5aad9fcb609b58b85bd2383048cb60c65187ca982233eeba6e"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.531319 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" event={"ID":"be7f0e88-7c2e-4c1b-a617-9da27584b057","Type":"ContainerStarted","Data":"409edc0db5af93f516852533a03a998e531a40c3a714a1fbdb1bb7ba29bce732"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.544938 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sg76q" event={"ID":"43c42cd5-18b5-4430-87c3-67ba872bb44f","Type":"ContainerStarted","Data":"82ec217c2c3d69361e991ba034b37bc3f8ade119e2311b07c11c1ab0f815bade"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.544989 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sg76q" event={"ID":"43c42cd5-18b5-4430-87c3-67ba872bb44f","Type":"ContainerStarted","Data":"381e8ccafc0498bce06193b2a7a30c3918116ed31f9ff7884dad2d3e7e6be83f"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.546998 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.564268 4740 patch_prober.go:28] interesting pod/console-operator-58897d9998-sg76q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.564335 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sg76q" podUID="43c42cd5-18b5-4430-87c3-67ba872bb44f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.564798 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5"] Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.587665 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" event={"ID":"e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e","Type":"ContainerStarted","Data":"c3995463aae883a4c4d27f07c1dd9111ea46b3c94a28546f7da3dd39d956d1f6"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.590730 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f" event={"ID":"69ccfcc2-8b4e-489d-8674-e41092950276","Type":"ContainerStarted","Data":"03ea2df1fa17609ebac475c4f3f98edc7716c0080bee8a6addd9ce85da76dc0c"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.593709 4740 generic.go:334] "Generic (PLEG): container finished" podID="c4f06d56-e3ce-413c-bbaf-f479d0629867" containerID="960a1a4f5c28203a9e0885750e7e0b5fa0e596f6282da9815adde6115a52f516" exitCode=0 Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.593824 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" event={"ID":"c4f06d56-e3ce-413c-bbaf-f479d0629867","Type":"ContainerDied","Data":"960a1a4f5c28203a9e0885750e7e0b5fa0e596f6282da9815adde6115a52f516"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.593863 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" event={"ID":"c4f06d56-e3ce-413c-bbaf-f479d0629867","Type":"ContainerStarted","Data":"4c43c77f39317f3cc9a021304196b2816b0303cd41f0b512cf6e56c3c03a14b2"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.596112 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" event={"ID":"efdc514b-cd12-4784-951b-8c0b2878dd02","Type":"ContainerStarted","Data":"49d190629401bbddc99538d7b530c6cd5df40e2944fcfd7e7a6d7af337c02364"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.597847 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" event={"ID":"64c5a0e0-9121-416a-b48c-219349cc9ba3","Type":"ContainerStarted","Data":"343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.597994 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" event={"ID":"64c5a0e0-9121-416a-b48c-219349cc9ba3","Type":"ContainerStarted","Data":"ae4d71e3522ae844e4dd59663ad89c2844759a466a2170626e75342d56406d36"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.598737 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.600835 4740 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rf9jx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.600869 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" podUID="64c5a0e0-9121-416a-b48c-219349cc9ba3" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.604919 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" event={"ID":"15af21ec-1c9c-46bc-b2be-8efa7628acf8","Type":"ContainerStarted","Data":"5368c2175a69931766496626c969c8bb6097bb7359180b5f8ff839823e4caf19"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.604949 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" event={"ID":"15af21ec-1c9c-46bc-b2be-8efa7628acf8","Type":"ContainerStarted","Data":"45fae1f9786492ffd48e21bda4e8da3e22095b043e42e4443733a2c8d189c3cc"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.622651 4740 generic.go:334] "Generic (PLEG): container finished" podID="fef3a2ff-8e8b-4e93-80b7-bd7b0249e223" containerID="338fe92cbea7733c9fb891db1c055c05f70187c11f3170f460a4eacd62356015" exitCode=0 Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.624920 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xzvss" event={"ID":"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223","Type":"ContainerDied","Data":"338fe92cbea7733c9fb891db1c055c05f70187c11f3170f460a4eacd62356015"} Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.638551 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rqtgp"] Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.652266 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.654237 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.152938223 +0000 UTC m=+159.790000822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.654541 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.657334 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.157323662 +0000 UTC m=+159.794386261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.753432 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj"] Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.755198 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.755538 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjf5j"] Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.759823 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.259432453 +0000 UTC m=+159.896495052 (durationBeforeRetry 500ms). 
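
The "Probe failed" records above (controller-manager, console-operator, oauth-openshift) are the expected pattern for containers that have just started: the kubelet issues an HTTPS GET against the pod IP's health endpoint and logs "connection refused" until the server inside the container begins listening. A rough Go equivalent of one such readiness check is sketched below; it is an illustration under the assumption that a 2xx-3xx status counts as success, not the kubelet's actual prober code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probe performs one HTTPS readiness check; connection errors and
    // non-2xx/3xx statuses are reported as failures.
    func probe(url string) string {
        client := &http.Client{
            Timeout: time.Second,
            // Pods commonly serve self-signed certs, so verification is
            // skipped here the way a bare health check would.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Sprintf("failure: Get %q: %v", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return "success"
        }
        return fmt.Sprintf("failure: status %d", resp.StatusCode)
    }

    func main() {
        // 10.217.0.10:8443 is the controller-manager pod endpoint from the log.
        fmt.Println(probe("https://10.217.0.10:8443/healthz"))
    }
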
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.797115 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ssmd6"] Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.834510 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9rptf"] Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.856829 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.857790 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.357274988 +0000 UTC m=+159.994337587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.958112 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.958372 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.458298963 +0000 UTC m=+160.095361562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.959643 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:30 crc kubenswrapper[4740]: E0130 15:58:30.960412 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.460368654 +0000 UTC m=+160.097431253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:30 crc kubenswrapper[4740]: W0130 15:58:30.976658 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0957754_07ca_49e4_93ee_aabf49ce5578.slice/crio-be538d725c24fac6046fe12cef9a1fce8a3e45459c1c1d820038e4cac51684be WatchSource:0}: Error finding container be538d725c24fac6046fe12cef9a1fce8a3e45459c1c1d820038e4cac51684be: Status 404 returned error can't find the container with id be538d725c24fac6046fe12cef9a1fce8a3e45459c1c1d820038e4cac51684be Jan 30 15:58:30 crc kubenswrapper[4740]: W0130 15:58:30.977118 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75b41ccc_dc45_4c27_8b9e_99cdddb63824.slice/crio-23298484ce9def879ae3707917abcde98e1afd6cc9e1738f49fa1616954f4d10 WatchSource:0}: Error finding container 23298484ce9def879ae3707917abcde98e1afd6cc9e1738f49fa1616954f4d10: Status 404 returned error can't find the container with id 23298484ce9def879ae3707917abcde98e1afd6cc9e1738f49fa1616954f4d10 Jan 30 15:58:30 crc kubenswrapper[4740]: I0130 15:58:30.983984 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf"] Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.062338 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:31 crc kubenswrapper[4740]: E0130 15:58:31.063013 4740 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.562979238 +0000 UTC m=+160.200041837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.063482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:31 crc kubenswrapper[4740]: E0130 15:58:31.066124 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.565986153 +0000 UTC m=+160.203048752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.163376 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.166014 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:31 crc kubenswrapper[4740]: E0130 15:58:31.166608 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.666576216 +0000 UTC m=+160.303638815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.267838 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.267913 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.267970 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.268007 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.268036 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:31 crc kubenswrapper[4740]: E0130 15:58:31.268731 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.768694137 +0000 UTC m=+160.405756927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.269877 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.285572 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.295138 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.295788 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.351674 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.367061 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.369554 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:31 crc kubenswrapper[4740]: E0130 15:58:31.369895 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.869872736 +0000 UTC m=+160.506935335 (durationBeforeRetry 500ms). 
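
Because the same volume fails roughly every 500ms, the retry records dominate this stretch of the log. When skimming the artifact it can help to count them per volume; the hypothetical stdlib-only helper below does that, assuming records have first been rejoined one per line as in the original kubelet.log (the pattern is inferred from the error text above):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches the CSI retry errors seen above and captures the PVC name.
    var retryLine = regexp.MustCompile(`volume "(pvc-[0-9a-f-]+)".*not found in the list of registered CSI drivers`)

    func main() {
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // kubelet lines can be very long
        for sc.Scan() {
            if m := retryLine.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for vol, n := range counts {
            fmt.Printf("%s: %d retries\n", vol, n)
        }
    }

Run it as, say, `zcat kubelet.log.gz | go run count_retries.go` (the file name is arbitrary).
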
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.398071 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" podStartSLOduration=128.398049647 podStartE2EDuration="2m8.398049647s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:31.394277063 +0000 UTC m=+160.031339662" watchObservedRunningTime="2026-01-30 15:58:31.398049647 +0000 UTC m=+160.035112246" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.432725 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sg76q" podStartSLOduration=128.432697829 podStartE2EDuration="2m8.432697829s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:31.430959756 +0000 UTC m=+160.068022375" watchObservedRunningTime="2026-01-30 15:58:31.432697829 +0000 UTC m=+160.069760428" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.470637 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:31 crc kubenswrapper[4740]: E0130 15:58:31.471075 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:31.971055024 +0000 UTC m=+160.608117623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.554816 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.571223 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:31 crc kubenswrapper[4740]: E0130 15:58:31.571625 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:32.071598106 +0000 UTC m=+160.708660705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.675613 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:31 crc kubenswrapper[4740]: E0130 15:58:31.676069 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:32.176050856 +0000 UTC m=+160.813113465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.683739 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" event={"ID":"ac1f6214-eb5a-4aef-81e2-4a513de6fef3","Type":"ContainerStarted","Data":"afbdce2d52fafbd4428e3cd0e4e9d84508168bea12a0d358b98900618a355410"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.685443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" event={"ID":"a0957754-07ca-49e4-93ee-aabf49ce5578","Type":"ContainerStarted","Data":"be538d725c24fac6046fe12cef9a1fce8a3e45459c1c1d820038e4cac51684be"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.688014 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" event={"ID":"be7f0e88-7c2e-4c1b-a617-9da27584b057","Type":"ContainerStarted","Data":"fbe1ddac7405ce0d6ca97b94969803d7579fdd59bc500b3f7007161270a14a90"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.731526 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" event={"ID":"e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e","Type":"ContainerStarted","Data":"435adc17dbfd6c0281856bef165f0c35aca75237dd1eee4a0e5998c1640d31c6"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.732674 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6x58j" event={"ID":"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf","Type":"ContainerStarted","Data":"52278d737605865c6fd9c059f4ee354712ca6469e5f7e3dcae7e7738bef87e1a"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.735956 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wdjz2" event={"ID":"da394642-26c8-4d31-8a0b-f49a357dbeda","Type":"ContainerStarted","Data":"60f4ba142633c2d9789afe66511351fa376e4aaa8f139b1cecd131545ca28ddf"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.737283 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-j2cnv" podStartSLOduration=128.737260889 podStartE2EDuration="2m8.737260889s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:31.736807498 +0000 UTC m=+160.373870097" watchObservedRunningTime="2026-01-30 15:58:31.737260889 +0000 UTC m=+160.374323488" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.738315 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" event={"ID":"f3c5a7f7-b993-459b-8f88-83b6861e4bb4","Type":"ContainerStarted","Data":"9e79223d9341dbb8fcff890d5895967aa666ca8afd64190485f019b3e50d6910"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.740392 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" event={"ID":"72d84287-a513-45bd-ada2-57b6e115a754","Type":"ContainerStarted","Data":"51179870e462339e1720b667b3ab5412029d166915e05bdeb05aef044a2a127c"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.761377 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f" event={"ID":"69ccfcc2-8b4e-489d-8674-e41092950276","Type":"ContainerStarted","Data":"9ec8d6fb13376dfc8d37ef193525ba9cb9ed21b3c481c39921def387bc7aaeab"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.761462 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f" event={"ID":"69ccfcc2-8b4e-489d-8674-e41092950276","Type":"ContainerStarted","Data":"ae65ff99c84a93cbf8ed1d1de41feb1cc52680ebd92d8d397676c012ddf34f6e"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.767497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" event={"ID":"74fcc367-6e97-4c45-83ec-d3257c125bff","Type":"ContainerStarted","Data":"ba6c1bf9407540ab762f303bfef3f208a39ba97f87da21dbdd433c7a3caeb7ed"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.768890 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" event={"ID":"cd736e46-6f9b-41ed-9503-1646948ed818","Type":"ContainerStarted","Data":"26e9b3b727494221fbd35731b7f06f4e100271dd1d62be914b5319d605f32068"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.768946 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" event={"ID":"cd736e46-6f9b-41ed-9503-1646948ed818","Type":"ContainerStarted","Data":"3b2a766b6422c5760b4f3767b38bd0f8258c7acfbdc7780c61ed5351308cf32b"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.769674 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" event={"ID":"43f87d70-f314-4419-be49-f97060083a68","Type":"ContainerStarted","Data":"5ce6ef91b3baf2d434b96938ad48a845bda401ae425209761a297a105bcaceeb"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.770431 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" event={"ID":"75b41ccc-dc45-4c27-8b9e-99cdddb63824","Type":"ContainerStarted","Data":"23298484ce9def879ae3707917abcde98e1afd6cc9e1738f49fa1616954f4d10"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.777001 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:31 crc kubenswrapper[4740]: E0130 15:58:31.779849 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:32.279813608 +0000 UTC m=+160.916876207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.790737 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7l62s" event={"ID":"87d90fd3-1a3e-407f-9c35-f6cfd6b01108","Type":"ContainerStarted","Data":"79d07b05a662c636049a9b2e8f5d73630a0d3272df4ac46e0dda7861a0b58918"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.812996 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" event={"ID":"d63559dc-58c0-452c-9629-a4f63f4b4463","Type":"ContainerStarted","Data":"95f8b64b1eb0c56cc0d2abbe7a1ce4f73a7f4315604e9d86b44f0fecd44be86c"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.823524 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pvwn4" event={"ID":"ee70f092-28be-470d-961b-0c777d465523","Type":"ContainerStarted","Data":"5e63c3486e11ee51490b375a55fdbe0e28b537a03c9bb413ebeb4525a9604836"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.844761 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" podStartSLOduration=128.844734924 podStartE2EDuration="2m8.844734924s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:31.806212975 +0000 UTC m=+160.443275574" watchObservedRunningTime="2026-01-30 15:58:31.844734924 +0000 UTC m=+160.481797523" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.848408 4740 generic.go:334] "Generic (PLEG): container finished" podID="cef47ed2-b13f-4f69-ab97-3665967de31d" containerID="afb6447c774e825fc0c9182a741b36b4107cb0548e2c04247fb85aa768c7cd18" exitCode=0 Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.849986 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" event={"ID":"cef47ed2-b13f-4f69-ab97-3665967de31d","Type":"ContainerDied","Data":"afb6447c774e825fc0c9182a741b36b4107cb0548e2c04247fb85aa768c7cd18"} Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.850448 4740 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fjvms container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.850483 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" podUID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.863584 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.880482 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj"] Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.882773 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:31 crc kubenswrapper[4740]: E0130 15:58:31.891235 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:32.39120556 +0000 UTC m=+161.028268159 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.941450 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" podStartSLOduration=127.94142612 podStartE2EDuration="2m7.94142612s" podCreationTimestamp="2026-01-30 15:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:31.91289039 +0000 UTC m=+160.549953009" watchObservedRunningTime="2026-01-30 15:58:31.94142612 +0000 UTC m=+160.578488709" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.977988 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sg76q" Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.987142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:31 crc kubenswrapper[4740]: E0130 15:58:31.987166 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:32.487132348 +0000 UTC m=+161.124194947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:31 crc kubenswrapper[4740]: I0130 15:58:31.988165 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:32 crc kubenswrapper[4740]: E0130 15:58:31.997121 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:32.497084335 +0000 UTC m=+161.134146934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.091509 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:32 crc kubenswrapper[4740]: E0130 15:58:32.092442 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:32.592409968 +0000 UTC m=+161.229472567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.194291 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:32 crc kubenswrapper[4740]: E0130 15:58:32.194763 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:32.694745565 +0000 UTC m=+161.331808164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.257114 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2crr5" podStartSLOduration=129.257072406 podStartE2EDuration="2m9.257072406s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:32.233441447 +0000 UTC m=+160.870504046" watchObservedRunningTime="2026-01-30 15:58:32.257072406 +0000 UTC m=+160.894135005" Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.296286 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:32 crc kubenswrapper[4740]: E0130 15:58:32.297192 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:32.797164114 +0000 UTC m=+161.434226713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.381312 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6"] Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.384336 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6"] Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.402299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:32 crc kubenswrapper[4740]: E0130 15:58:32.402892 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:32.902873604 +0000 UTC m=+161.539936203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.505280 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-97t7f" podStartSLOduration=129.505252992 podStartE2EDuration="2m9.505252992s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:32.504787201 +0000 UTC m=+161.141849800" watchObservedRunningTime="2026-01-30 15:58:32.505252992 +0000 UTC m=+161.142315591" Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.505706 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:32 crc kubenswrapper[4740]: E0130 15:58:32.506069 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:33.006052352 +0000 UTC m=+161.643114951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.546924 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-dl6xs" podStartSLOduration=129.546904399 podStartE2EDuration="2m9.546904399s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:32.543637868 +0000 UTC m=+161.180700467" watchObservedRunningTime="2026-01-30 15:58:32.546904399 +0000 UTC m=+161.183966998" Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.607508 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:32 crc kubenswrapper[4740]: E0130 15:58:32.607929 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:33.107909717 +0000 UTC m=+161.744972316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.725546 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:32 crc kubenswrapper[4740]: E0130 15:58:32.726308 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:33.226279373 +0000 UTC m=+161.863341972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.820400 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk"] Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.835930 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:32 crc kubenswrapper[4740]: E0130 15:58:32.836370 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:33.336338762 +0000 UTC m=+161.973401361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.928163 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc"] Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.933744 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm"] Jan 30 15:58:32 crc kubenswrapper[4740]: W0130 15:58:32.939660 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9db18118_120c_43d1_a71a_281f8c7a0adf.slice/crio-1382df6682547d6753a5732da63209231dac1110074c5d4e9d3bef847865fa39 WatchSource:0}: Error finding container 1382df6682547d6753a5732da63209231dac1110074c5d4e9d3bef847865fa39: Status 404 returned error can't find the container with id 1382df6682547d6753a5732da63209231dac1110074c5d4e9d3bef847865fa39 Jan 30 15:58:32 crc kubenswrapper[4740]: I0130 15:58:32.940943 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:32 crc kubenswrapper[4740]: E0130 15:58:32.941304 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 15:58:33.441284004 +0000 UTC m=+162.078346603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.002974 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vvtsd"] Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.003329 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" event={"ID":"ac1f6214-eb5a-4aef-81e2-4a513de6fef3","Type":"ContainerStarted","Data":"21d472ae2c00acc07ec29c32f49bc8e3c75e65ebb94ac47fc4292c3cfcddc014"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.036554 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" event={"ID":"d63559dc-58c0-452c-9629-a4f63f4b4463","Type":"ContainerStarted","Data":"f62eff1112931b4885bbe412f5a764974ffd32cb7e0efa9a2b67315b7e032c1a"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.042316 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:33 crc kubenswrapper[4740]: E0130 15:58:33.042759 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:33.542740529 +0000 UTC m=+162.179803128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.053280 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" event={"ID":"c4f06d56-e3ce-413c-bbaf-f479d0629867","Type":"ContainerStarted","Data":"f3244c88ecbddbb45b4a8bd92b7d88a829883ac40c5793cc877741686193c327"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.072571 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t"] Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.096431 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp"] Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.107448 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mscjf"] Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.110106 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk"] Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.110798 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5mrzj" podStartSLOduration=129.110787602 podStartE2EDuration="2m9.110787602s" podCreationTimestamp="2026-01-30 15:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:33.084893878 +0000 UTC m=+161.721956477" watchObservedRunningTime="2026-01-30 15:58:33.110787602 +0000 UTC m=+161.747850201" Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.126942 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" event={"ID":"efdc514b-cd12-4784-951b-8c0b2878dd02","Type":"ContainerStarted","Data":"0818b95bfaefa54a846839f26f0f078ce2a0871f91933d97c46641c81d3b2236"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.151661 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:33 crc kubenswrapper[4740]: E0130 15:58:33.152749 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:33.652710946 +0000 UTC m=+162.289773545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.152964 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:33 crc kubenswrapper[4740]: E0130 15:58:33.153587 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:33.653578057 +0000 UTC m=+162.290640656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:33 crc kubenswrapper[4740]: W0130 15:58:33.168008 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddca51efa_eb5b_45d2_a9ff_8c88e7b52ba0.slice/crio-117bda409f52a192199b2e4457da3dbeff88fda853beb61faa348a9743dff366 WatchSource:0}: Error finding container 117bda409f52a192199b2e4457da3dbeff88fda853beb61faa348a9743dff366: Status 404 returned error can't find the container with id 117bda409f52a192199b2e4457da3dbeff88fda853beb61faa348a9743dff366 Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.178693 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq"] Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.180406 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" podStartSLOduration=129.180383424 podStartE2EDuration="2m9.180383424s" podCreationTimestamp="2026-01-30 15:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:33.152423119 +0000 UTC m=+161.789485718" watchObservedRunningTime="2026-01-30 15:58:33.180383424 +0000 UTC m=+161.817446013" Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.200878 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth"] Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.234327 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6" 
event={"ID":"26373528-9c79-419c-a68b-8cce50827fd5","Type":"ContainerStarted","Data":"ebddc47a0ace7149b38facb99e2182e9d246b839f6846bb368c9e32a0ee6a132"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.251125 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5q9nt"] Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.254328 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:33 crc kubenswrapper[4740]: E0130 15:58:33.254842 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:33.754815697 +0000 UTC m=+162.391878296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:33 crc kubenswrapper[4740]: W0130 15:58:33.261527 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd35d5638_daa3_4829_bae0_449278a71719.slice/crio-e86d467c1bf33bbb5b75b897b0a5fdc9e5248f0ffe5a54e0fb19c926203a5d83 WatchSource:0}: Error finding container e86d467c1bf33bbb5b75b897b0a5fdc9e5248f0ffe5a54e0fb19c926203a5d83: Status 404 returned error can't find the container with id e86d467c1bf33bbb5b75b897b0a5fdc9e5248f0ffe5a54e0fb19c926203a5d83 Jan 30 15:58:33 crc kubenswrapper[4740]: W0130 15:58:33.265505 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0336ee48_8f1e_49ed_a021_a01446330b39.slice/crio-496e5e873d3ffd85650e0596d27ad00d9079779ef24338053904fce6a51446cf WatchSource:0}: Error finding container 496e5e873d3ffd85650e0596d27ad00d9079779ef24338053904fce6a51446cf: Status 404 returned error can't find the container with id 496e5e873d3ffd85650e0596d27ad00d9079779ef24338053904fce6a51446cf Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.278782 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt"] Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.288499 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4"] Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.307700 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" event={"ID":"771ae644-7090-4bf7-915c-142ca8c5e982","Type":"ContainerStarted","Data":"5e9e0a865ffbfb45cff8e4df690380707cdd00c10eadd6ba924307e1391eb427"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.355797 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:33 crc kubenswrapper[4740]: E0130 15:58:33.356607 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:33.856590999 +0000 UTC m=+162.493653598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.443407 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.466911 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:33 crc kubenswrapper[4740]: E0130 15:58:33.470445 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:33.970412531 +0000 UTC m=+162.607475130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.475319 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" event={"ID":"a0957754-07ca-49e4-93ee-aabf49ce5578","Type":"ContainerStarted","Data":"4794702f77f2c978cdc591a3eedd839938ef28cd434236c8cb955fe0d0f42216"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.561149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" event={"ID":"f3c5a7f7-b993-459b-8f88-83b6861e4bb4","Type":"ContainerStarted","Data":"bc02e5d757fa0f60bb26e238f64142353f2ae3b660ca48f7f79d420b8a75ec2f"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.569476 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:33 crc kubenswrapper[4740]: E0130 15:58:33.570265 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:34.070246436 +0000 UTC m=+162.707309035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.606453 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pvwn4" event={"ID":"ee70f092-28be-470d-961b-0c777d465523","Type":"ContainerStarted","Data":"7117a56772d7a2be974843dea9fbc674dc7fddb0185931997ffdf32d1a3f8fd7"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.627703 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" event={"ID":"1232da99-b822-4ac9-8def-d246aacd1df6","Type":"ContainerStarted","Data":"0281a63b01b83b9205f32436d08f6d750df80a57b6d55237efbb454ce19d82c1"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.674574 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.674648 4740 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mjf5j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.674700 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" podUID="75b41ccc-dc45-4c27-8b9e-99cdddb63824" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.675862 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:33 crc kubenswrapper[4740]: E0130 15:58:33.677557 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:34.177527336 +0000 UTC m=+162.814589935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.736113 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xzvss" event={"ID":"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223","Type":"ContainerStarted","Data":"ac55cec1a13828b88904dbbdbf1ad503523e30a20779d05dafc89b1f594d30c4"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.768386 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7l62s" event={"ID":"87d90fd3-1a3e-407f-9c35-f6cfd6b01108","Type":"ContainerStarted","Data":"d5424b1e8e3b24275b21ca21f5fdb14bc7556af43f04fde527223d93924170a6"} Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.782010 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:33 crc kubenswrapper[4740]: E0130 15:58:33.782675 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:34.282660873 +0000 UTC m=+162.919723472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.798400 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wdjz2" event={"ID":"da394642-26c8-4d31-8a0b-f49a357dbeda","Type":"ContainerStarted","Data":"09b08eb3c019ce56a89148559a57b7880616aa259a4ad5e8abf18061d48b4db0"}
Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.799409 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wdjz2"
Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.867847 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6x58j" event={"ID":"3f96c4bf-6b12-45ff-bbd7-a3c6f2ecccaf","Type":"ContainerStarted","Data":"62605373971ec691ae5f898b9caf3fc73bfecc1ab33387ff51740f3c50ec56c7"}
Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.874801 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4"
Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.875398 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4"
Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.892970 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:33 crc kubenswrapper[4740]: E0130 15:58:33.894213 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:34.394184438 +0000 UTC m=+163.031247207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.909689 4740 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-vc2l4 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 30 15:58:33 crc kubenswrapper[4740]: I0130 15:58:33.909782 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4" podUID="c4f06d56-e3ce-413c-bbaf-f479d0629867" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:33.998882 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:34 crc kubenswrapper[4740]: E0130 15:58:34.001485 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:34.501471308 +0000 UTC m=+163.138533907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.104201 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:34 crc kubenswrapper[4740]: E0130 15:58:34.104637 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:34.604612575 +0000 UTC m=+163.241675174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.219230 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:34 crc kubenswrapper[4740]: E0130 15:58:34.219770 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:34.719752491 +0000 UTC m=+163.356815090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.323217 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:34 crc kubenswrapper[4740]: E0130 15:58:34.324492 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:34.824466557 +0000 UTC m=+163.461529156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.333751 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pvwn4"
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.341980 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 15:58:34 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld
Jan 30 15:58:34 crc kubenswrapper[4740]: [+]process-running ok
Jan 30 15:58:34 crc kubenswrapper[4740]: healthz check failed
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.342092 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
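Annotation: every MountDevice and TearDown failure above carries the same root-cause string: the kubelet asked for a CSI client for kubevirt.io.hostpath-provisioner before that driver's node plugin had registered itself, so the lookup in the kubelet's in-memory driver registry missed. A minimal Go sketch of such a registry lookup follows; the type and field names are illustrative assumptions, not the actual kubelet source.

// Sketch: a name-keyed CSI driver registry whose lookup fails until the
// node plugin registers, mirroring the error text seen in the log above.
package main

import (
	"fmt"
	"sync"
)

type csiDriverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> plugin socket path
}

func (r *csiDriverRegistry) client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	sock, ok := r.drivers[name]
	if !ok {
		// Same wording as the kubelet errors in this log.
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return sock, nil
}

func main() {
	reg := &csiDriverRegistry{drivers: map[string]string{}}
	if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("MountDevice would fail with:", err)
	}
	// Once the node plugin registers, the same lookup succeeds.
	reg.mu.Lock()
	reg.drivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock" // hypothetical socket path
	reg.mu.Unlock()
	sock, _ := reg.client("kubevirt.io.hostpath-provisioner")
	fmt.Println("after registration:", sock)
}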
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.426974 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:34 crc kubenswrapper[4740]: E0130 15:58:34.427418 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:34.927397879 +0000 UTC m=+163.564460478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.476498 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pvwn4" podStartSLOduration=131.47647734 podStartE2EDuration="2m11.47647734s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:34.461370114 +0000 UTC m=+163.098432713" watchObservedRunningTime="2026-01-30 15:58:34.47647734 +0000 UTC m=+163.113539939"
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.527515 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" podStartSLOduration=131.52749485 podStartE2EDuration="2m11.52749485s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:34.522424014 +0000 UTC m=+163.159486613" watchObservedRunningTime="2026-01-30 15:58:34.52749485 +0000 UTC m=+163.164557449"
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.529596 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:34 crc kubenswrapper[4740]: E0130 15:58:34.529683 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:35.029661274 +0000 UTC m=+163.666723863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.531611 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:34 crc kubenswrapper[4740]: E0130 15:58:34.531929 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:35.03191656 +0000 UTC m=+163.668979149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.601141 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" podStartSLOduration=131.600687591 podStartE2EDuration="2m11.600687591s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:34.598820155 +0000 UTC m=+163.235882744" watchObservedRunningTime="2026-01-30 15:58:34.600687591 +0000 UTC m=+163.237750190"
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.601637 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-ssmd6" podStartSLOduration=130.601629325 podStartE2EDuration="2m10.601629325s" podCreationTimestamp="2026-01-30 15:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:34.5644805 +0000 UTC m=+163.201543099" watchObservedRunningTime="2026-01-30 15:58:34.601629325 +0000 UTC m=+163.238691924"
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.637305 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:34 crc kubenswrapper[4740]: E0130 15:58:34.637843 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:35.137822516 +0000 UTC m=+163.774885115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.675085 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" podStartSLOduration=131.675065263 podStartE2EDuration="2m11.675065263s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:34.636475192 +0000 UTC m=+163.273537791" watchObservedRunningTime="2026-01-30 15:58:34.675065263 +0000 UTC m=+163.312127862"
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.736085 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wdjz2" podStartSLOduration=7.73605438 podStartE2EDuration="7.73605438s" podCreationTimestamp="2026-01-30 15:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:34.675471173 +0000 UTC m=+163.312533772" watchObservedRunningTime="2026-01-30 15:58:34.73605438 +0000 UTC m=+163.373116989"
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.738856 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:34 crc kubenswrapper[4740]: E0130 15:58:34.739253 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:35.23923616 +0000 UTC m=+163.876298759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.783245 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hcwm5" podStartSLOduration=131.783223894 podStartE2EDuration="2m11.783223894s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:34.781617514 +0000 UTC m=+163.418680113" watchObservedRunningTime="2026-01-30 15:58:34.783223894 +0000 UTC m=+163.420286483"
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.807174 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-7l62s" podStartSLOduration=7.807130589 podStartE2EDuration="7.807130589s" podCreationTimestamp="2026-01-30 15:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:34.740660555 +0000 UTC m=+163.377723144" watchObservedRunningTime="2026-01-30 15:58:34.807130589 +0000 UTC m=+163.444193188"
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.842979 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:34 crc kubenswrapper[4740]: E0130 15:58:34.843870 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:35.343364821 +0000 UTC m=+163.980427420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.904243 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" event={"ID":"0336ee48-8f1e-49ed-a021-a01446330b39","Type":"ContainerStarted","Data":"496e5e873d3ffd85650e0596d27ad00d9079779ef24338053904fce6a51446cf"}
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.914773 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cb2qj" event={"ID":"1232da99-b822-4ac9-8def-d246aacd1df6","Type":"ContainerStarted","Data":"9d5655e765f814d3d650fde566d288a48391a58c3de9755068111d497681f450"}
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.927453 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" event={"ID":"cd736e46-6f9b-41ed-9503-1646948ed818","Type":"ContainerStarted","Data":"9745c218b3cf0c9ddd646eec4cef2395438fa422d20c8dee4d0f221a9c8e662c"}
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.945232 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:34 crc kubenswrapper[4740]: E0130 15:58:34.945716 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:35.445698878 +0000 UTC m=+164.082761477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.957209 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" event={"ID":"ac1f6214-eb5a-4aef-81e2-4a513de6fef3","Type":"ContainerStarted","Data":"2b1d610c7c3488547787ab19123e9091cbf7b1620def6f717db387a6e4e826c6"}
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.969988 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" event={"ID":"cef47ed2-b13f-4f69-ab97-3665967de31d","Type":"ContainerStarted","Data":"2315fd04be6b555e1f8aa45078c179203afe62f513e223fb6b14dc09e5ed010a"}
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.988137 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" event={"ID":"f4a1a69b-6c27-4b5c-95ed-ea05e85bee50","Type":"ContainerStarted","Data":"763f3e87fed60234eea4985e816db3179a8d81aa48eb93ce8003e74a1fa301dc"}
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.988200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" event={"ID":"f4a1a69b-6c27-4b5c-95ed-ea05e85bee50","Type":"ContainerStarted","Data":"d2a440a8b9214237809dc1f71b8624da984a7048f4fa70c7dbfdd3cd771ce20f"}
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.988854 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6x58j" podStartSLOduration=7.988829781 podStartE2EDuration="7.988829781s" podCreationTimestamp="2026-01-30 15:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:34.841954336 +0000 UTC m=+163.479016935" watchObservedRunningTime="2026-01-30 15:58:34.988829781 +0000 UTC m=+163.625892370"
Jan 30 15:58:34 crc kubenswrapper[4740]: I0130 15:58:34.990117 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mqdkd" podStartSLOduration=131.990110823 podStartE2EDuration="2m11.990110823s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:34.989490098 +0000 UTC m=+163.626552697" watchObservedRunningTime="2026-01-30 15:58:34.990110823 +0000 UTC m=+163.627173422"
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.018323 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wdjz2" event={"ID":"da394642-26c8-4d31-8a0b-f49a357dbeda","Type":"ContainerStarted","Data":"4e6bab1bfa4003f0936276a9353001594f21e627a7d9fa4bceb1cc857d6b9354"}
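Annotation: the "SyncLoop (PLEG): event for pod" records come from the kubelet's Pod Lifecycle Event Generator, which relists container runtime state and feeds lifecycle events into the sync loop; for ContainerStarted, the Data field carries the started container or sandbox ID. A simplified Go sketch of that event shape, with types reduced to plain strings as an illustrative assumption:

// Sketch: the general shape of a PLEG event and its delivery over a
// channel into a sync loop, modeled on the log records above.
package main

import "fmt"

type PodLifecycleEventType string

const ContainerStarted PodLifecycleEventType = "ContainerStarted"

type PodLifecycleEvent struct {
	ID   string                // pod UID
	Type PodLifecycleEventType // e.g. ContainerStarted
	Data interface{}           // container/sandbox ID for ContainerStarted
}

func main() {
	ch := make(chan *PodLifecycleEvent, 1)
	ch <- &PodLifecycleEvent{
		ID:   "da394642-26c8-4d31-8a0b-f49a357dbeda",
		Type: ContainerStarted,
		Data: "4e6bab1bfa4003f0936276a9353001594f21e627a7d9fa4bceb1cc857d6b9354",
	}
	ev := <-ch
	fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %v\n", ev.ID, ev.Type, ev.Data)
}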
pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" event={"ID":"b4be91ca-c1df-458b-b8da-29f713fefe22","Type":"ContainerStarted","Data":"4b76f876c40cbe9002b8b761ac6ec48d6d0794b158b9eb9f96785d0d672a8c9d"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.048782 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" event={"ID":"b4be91ca-c1df-458b-b8da-29f713fefe22","Type":"ContainerStarted","Data":"b03f6ba481ba932e7a79cee9dd1b3931b98a12a7e6866b842a0c0e22c9b084dc"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.066308 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-q42cf" podStartSLOduration=132.066283539 podStartE2EDuration="2m12.066283539s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.064724 +0000 UTC m=+163.701786609" watchObservedRunningTime="2026-01-30 15:58:35.066283539 +0000 UTC m=+163.703346148" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.070703 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:35 crc kubenswrapper[4740]: E0130 15:58:35.072662 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:35.572634427 +0000 UTC m=+164.209697026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.137934 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cnnxt" podStartSLOduration=132.137911661 podStartE2EDuration="2m12.137911661s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.100737086 +0000 UTC m=+163.737799685" watchObservedRunningTime="2026-01-30 15:58:35.137911661 +0000 UTC m=+163.774974260" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.147055 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" event={"ID":"771ae644-7090-4bf7-915c-142ca8c5e982","Type":"ContainerStarted","Data":"d538277f163c69d0397ca04b86628887b74fa26f0467edbf5871d5e1479270cc"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.148598 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.175043 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:35 crc kubenswrapper[4740]: E0130 15:58:35.184484 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:35.68445675 +0000 UTC m=+164.321519349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.214424 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vvtsd" event={"ID":"dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0","Type":"ContainerStarted","Data":"9447fc54666659448c2b302e795db47cc514aa218c5d4eb3777b8f4344b046c2"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.214478 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vvtsd" event={"ID":"dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0","Type":"ContainerStarted","Data":"117bda409f52a192199b2e4457da3dbeff88fda853beb61faa348a9743dff366"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.217212 4740 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7dgp6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.217292 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" podUID="771ae644-7090-4bf7-915c-142ca8c5e982" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.227250 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-vvtsd" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.237547 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.237605 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.248065 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" event={"ID":"30b7b1ea-4fad-47ff-8278-6d1e3f256b51","Type":"ContainerStarted","Data":"a501e4406d8a5ab7cf8a3b1585e1d522f38eea6cf781d589a82f9f8ce084cf0b"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.271908 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" podStartSLOduration=132.271873305 podStartE2EDuration="2m12.271873305s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
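Annotation: the probe failures logged around here are ordinary HTTP GETs against pod endpoints that are not serving yet; a dial error ("connection refused") or a status code of 400 or above is reported as a probe failure. A hedged Go sketch of such a check, using the downloads pod's endpoint from the log (the helper name is an assumption, not the kubelet's prober API):

// Sketch: an HTTP readiness-style check in the spirit of the prober
// records above; connection errors and status >= 400 count as failures.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 10.217.0.16:8080: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://10.217.0.16:8080/"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}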
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.271908 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" podStartSLOduration=132.271873305 podStartE2EDuration="2m12.271873305s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.217690697 +0000 UTC m=+163.854753296" watchObservedRunningTime="2026-01-30 15:58:35.271873305 +0000 UTC m=+163.908935904"
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.275131 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" podStartSLOduration=131.275106256 podStartE2EDuration="2m11.275106256s" podCreationTimestamp="2026-01-30 15:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.270420329 +0000 UTC m=+163.907482928" watchObservedRunningTime="2026-01-30 15:58:35.275106256 +0000 UTC m=+163.912168855"
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.280267 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" event={"ID":"43f87d70-f314-4419-be49-f97060083a68","Type":"ContainerStarted","Data":"dac2853dda5baac2b534353157dd3c354e576bc164969f802f55d6b763cbcebb"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.280329 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" event={"ID":"43f87d70-f314-4419-be49-f97060083a68","Type":"ContainerStarted","Data":"5c0d0eda58058883e041ed4f1b04efea2e5413dfed6291bd966c52d783a791a5"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.286626 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:35 crc kubenswrapper[4740]: E0130 15:58:35.288026 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:35.788003597 +0000 UTC m=+164.425066196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.298593 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b63fa53846a3c7e74e96b3bedfe99116ce977057237dcb3fc27fb128fe855be6"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.339584 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 15:58:35 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld
Jan 30 15:58:35 crc kubenswrapper[4740]: [+]process-running ok
Jan 30 15:58:35 crc kubenswrapper[4740]: healthz check failed
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.339674 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.356420 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-vvtsd" podStartSLOduration=132.356397349 podStartE2EDuration="2m12.356397349s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.306238521 +0000 UTC m=+163.943301120" watchObservedRunningTime="2026-01-30 15:58:35.356397349 +0000 UTC m=+163.993459938"
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.358992 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-9rptf" podStartSLOduration=132.358984553 podStartE2EDuration="2m12.358984553s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.356007559 +0000 UTC m=+163.993070158" watchObservedRunningTime="2026-01-30 15:58:35.358984553 +0000 UTC m=+163.996047142"
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.362257 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xzvss" event={"ID":"fef3a2ff-8e8b-4e93-80b7-bd7b0249e223","Type":"ContainerStarted","Data":"8cffcf0fd6f8217afba070ff17471e69c53e3068f4883b0b306a5cf391457cf7"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.362291 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" event={"ID":"9db18118-120c-43d1-a71a-281f8c7a0adf","Type":"ContainerStarted","Data":"2866bdf7744bb1cd7e3defa55095dd2c14e0df98780def4a822e46ac71f8dbab"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.362302 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4mghk" event={"ID":"9db18118-120c-43d1-a71a-281f8c7a0adf","Type":"ContainerStarted","Data":"1382df6682547d6753a5732da63209231dac1110074c5d4e9d3bef847865fa39"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.387800 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" event={"ID":"d774ab00-3113-4ad1-8de8-b66fc0b31b15","Type":"ContainerStarted","Data":"aed6a3dd685048cbf178469696e7b5d9d2c5e4a5c8d54b26d7f651319fb29a72"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.387867 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" event={"ID":"d774ab00-3113-4ad1-8de8-b66fc0b31b15","Type":"ContainerStarted","Data":"b1c1702055685903fd594279324e39b4136530a8afb576299bcb2a487fb4d14a"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.389019 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:35 crc kubenswrapper[4740]: E0130 15:58:35.389435 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:35.889418151 +0000 UTC m=+164.526480750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.405420 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" event={"ID":"dce7e8a0-f532-4564-9e0b-771e10667429","Type":"ContainerStarted","Data":"ed524804c0bb5f708aa217d7ddfb1786bc03dd3cb4f80eb65d66d089e4381810"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.424688 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" event={"ID":"d35d5638-daa3-4829-bae0-449278a71719","Type":"ContainerStarted","Data":"e86d467c1bf33bbb5b75b897b0a5fdc9e5248f0ffe5a54e0fb19c926203a5d83"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.437121 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xzvss" podStartSLOduration=132.437101047 podStartE2EDuration="2m12.437101047s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.433926208 +0000 UTC m=+164.070988807" watchObservedRunningTime="2026-01-30 15:58:35.437101047 +0000 UTC m=+164.074163636"
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.444605 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" event={"ID":"efdc514b-cd12-4784-951b-8c0b2878dd02","Type":"ContainerStarted","Data":"d0eecc65e6a990c0ef3948d7ed84426339b28ed8b85ca961c38a3d2ab90b1d45"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.478411 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" event={"ID":"2b31422a-a539-4d2d-ba7b-0b7cffd27bf2","Type":"ContainerStarted","Data":"c537ebaa879077baf8d487ad44c79ef80783fb4552b4e945b87692333b980b2e"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.490332 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:35 crc kubenswrapper[4740]: E0130 15:58:35.492079 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:35.992052185 +0000 UTC m=+164.629114784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.492558 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"67fdae858dde971df7670319f212c9fd368ceb88204b3b262f335a37f1f5ff8f"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.503003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"91658575571b8bc6283158e1bceae21e4feacf570b4f4d1ad4e794c52d27ec5b"}
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.510623 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" event={"ID":"75b41ccc-dc45-4c27-8b9e-99cdddb63824","Type":"ContainerStarted","Data":"b94d1163766e043bebcc1491bde2135778585825bf8ee9a417d08d9e77b93d94"}
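Annotation: the nestedpendingoperations errors show the volume manager's retry gating: a failed mount or unmount records a "no retries permitted until" deadline (consistently 500ms out in every occurrence here), and the reconciler re-queues the operation only after that deadline passes, which is why the identical error repeats roughly twice per second. A generic Go sketch of that gating follows; it is not the kubelet's actual implementation, just the pattern.

// Sketch: an operation that refuses immediate retries by recording a
// not-before deadline on failure, as the log records above suggest.
package main

import (
	"errors"
	"fmt"
	"time"
)

type pendingOp struct {
	notBefore time.Time
}

func (op *pendingOp) tryRun(run func() error, backoff time.Duration) error {
	if time.Now().Before(op.notBefore) {
		return errors.New("operation is already pending; retry not yet permitted")
	}
	if err := run(); err != nil {
		op.notBefore = time.Now().Add(backoff)
		return fmt.Errorf("failed. No retries permitted until %s: %w",
			op.notBefore.Format(time.RFC3339Nano), err)
	}
	return nil
}

func main() {
	op := &pendingOp{}
	mount := func() error {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	fmt.Println(op.tryRun(mount, 500*time.Millisecond))
	fmt.Println(op.tryRun(mount, 500*time.Millisecond)) // within 500ms: skipped
}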
m=+164.162889135" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.532025 4740 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mjf5j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.532085 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" podUID="75b41ccc-dc45-4c27-8b9e-99cdddb63824" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.560116 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" event={"ID":"74fcc367-6e97-4c45-83ec-d3257c125bff","Type":"ContainerStarted","Data":"afa8e760d30027f4d9a8736dfd3509d8852d5bc7f7f2eebde8f64ab346f0b754"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.564184 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-z48p4" podStartSLOduration=132.56416831 podStartE2EDuration="2m12.56416831s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.56097769 +0000 UTC m=+164.198040289" watchObservedRunningTime="2026-01-30 15:58:35.56416831 +0000 UTC m=+164.201230919" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.588271 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" event={"ID":"47f6c1f8-2688-44c9-928b-38c58a101de0","Type":"ContainerStarted","Data":"e6fbb43762525e582c21b24e3f79d7cdc83eff18666e50c36adfecdeab36b9e6"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.588341 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" event={"ID":"47f6c1f8-2688-44c9-928b-38c58a101de0","Type":"ContainerStarted","Data":"bf13e94ad4f6e20c1a3a28b0230cf304beb814de7ea19d16de2c6ed551a66259"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.589654 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.592281 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:35 crc kubenswrapper[4740]: E0130 15:58:35.595068 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:36.095048238 +0000 UTC m=+164.732111027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.607609 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" podStartSLOduration=132.6075822 podStartE2EDuration="2m12.6075822s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.606817751 +0000 UTC m=+164.243880350" watchObservedRunningTime="2026-01-30 15:58:35.6075822 +0000 UTC m=+164.244644799" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.617597 4740 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tgvth container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.617669 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" podUID="47f6c1f8-2688-44c9-928b-38c58a101de0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.618228 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" event={"ID":"e4c39bb4-5bb2-4fb0-85b2-f4ee90cf163e","Type":"ContainerStarted","Data":"c825ae03d33891b21ab073f78f2b5e8a462ec955ae3ca1c0901d84b3f5cd9533"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.619134 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.629101 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6" event={"ID":"26373528-9c79-419c-a68b-8cce50827fd5","Type":"ContainerStarted","Data":"4a1943dcab3d73b5cd5729b7f20cb1216e1981aad1ae62dfd73178a648219045"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.652276 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5q9nt" event={"ID":"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9","Type":"ContainerStarted","Data":"71af78eee9c9078358eefbf291d2cf2c028b35d41088363e9bf4f4e1d4a8d074"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.668633 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" event={"ID":"d14606d1-1dc6-4ec7-a1e4-6eabc01b5548","Type":"ContainerStarted","Data":"ebdbbaac5d751e80866b51bc510205ba33939a8066f3857a79def0cb40a91015"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.668720 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" event={"ID":"d14606d1-1dc6-4ec7-a1e4-6eabc01b5548","Type":"ContainerStarted","Data":"3c006b2ab48dc6761661553bb6b1ea43ae54205afcde2ea72440ad45e79b2224"} Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.685818 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" podStartSLOduration=132.685799797 podStartE2EDuration="2m12.685799797s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.682858074 +0000 UTC m=+164.319920673" watchObservedRunningTime="2026-01-30 15:58:35.685799797 +0000 UTC m=+164.322862396" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.695844 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:35 crc kubenswrapper[4740]: E0130 15:58:35.697719 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:36.197693473 +0000 UTC m=+164.834756072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.719217 4740 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g9wqg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.719377 4740 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g9wqg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.719459 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" podUID="cef47ed2-b13f-4f69-ab97-3665967de31d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.719290 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" podUID="cef47ed2-b13f-4f69-ab97-3665967de31d" containerName="openshift-config-operator" probeResult="failure" 
output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.727115 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sp4zn" podStartSLOduration=132.727094624 podStartE2EDuration="2m12.727094624s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.720953252 +0000 UTC m=+164.358015851" watchObservedRunningTime="2026-01-30 15:58:35.727094624 +0000 UTC m=+164.364157223" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.806454 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:35 crc kubenswrapper[4740]: E0130 15:58:35.814566 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:36.314548051 +0000 UTC m=+164.951610640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.822854 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" podStartSLOduration=131.822833197 podStartE2EDuration="2m11.822833197s" podCreationTimestamp="2026-01-30 15:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.787416506 +0000 UTC m=+164.424479105" watchObservedRunningTime="2026-01-30 15:58:35.822833197 +0000 UTC m=+164.459895796" Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.909177 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:35 crc kubenswrapper[4740]: E0130 15:58:35.909508 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:36.409463563 +0000 UTC m=+165.046526162 (durationBeforeRetry 500ms). 
Jan 30 15:58:35 crc kubenswrapper[4740]: E0130 15:58:35.909508 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:36.409463563 +0000 UTC m=+165.046526162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.909715 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:35 crc kubenswrapper[4740]: E0130 15:58:35.910547 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:36.41053957 +0000 UTC m=+165.047602169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:35 crc kubenswrapper[4740]: I0130 15:58:35.988228 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-5q9nt" podStartSLOduration=132.988208883 podStartE2EDuration="2m12.988208883s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:35.931904192 +0000 UTC m=+164.568966791" watchObservedRunningTime="2026-01-30 15:58:35.988208883 +0000 UTC m=+164.625271482"
Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.013683 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.014059 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:36.514034166 +0000 UTC m=+165.151096765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.115108 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.115663 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:36.615635224 +0000 UTC m=+165.252697963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.177329 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6" podStartSLOduration=133.177302559 podStartE2EDuration="2m13.177302559s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:36.166382627 +0000 UTC m=+164.803445236" watchObservedRunningTime="2026-01-30 15:58:36.177302559 +0000 UTC m=+164.814365158"
Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.178263 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" podStartSLOduration=132.178251503 podStartE2EDuration="2m12.178251503s" podCreationTimestamp="2026-01-30 15:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:36.033285385 +0000 UTC m=+164.670347984" watchObservedRunningTime="2026-01-30 15:58:36.178251503 +0000 UTC m=+164.815314102"
Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.217161 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.217665 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:36.717640343 +0000 UTC m=+165.354702942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.243452 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" podStartSLOduration=132.243425745 podStartE2EDuration="2m12.243425745s" podCreationTimestamp="2026-01-30 15:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:36.242178944 +0000 UTC m=+164.879241543" watchObservedRunningTime="2026-01-30 15:58:36.243425745 +0000 UTC m=+164.880488344"
Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.319062 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.319636 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:36.819607051 +0000 UTC m=+165.456669650 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.330478 4740 csr.go:261] certificate signing request csr-9vs5d is approved, waiting to be issued Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.341937 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:36 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:36 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:36 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.342034 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.344672 4740 csr.go:257] certificate signing request csr-9vs5d is issued Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.420490 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.420750 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:36.920701916 +0000 UTC m=+165.557764515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.420909 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.421375 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 15:58:36.921339652 +0000 UTC m=+165.558402241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.522411 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.522618 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.022577362 +0000 UTC m=+165.659639961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.522738 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.523125 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.023105755 +0000 UTC m=+165.660168354 (durationBeforeRetry 500ms). 
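The router-default startup probe failures a few records back include the start of the HTTP response body, an aggregated health report in which [-]backend-http and [-]has-synced are failing while [+]process-running passes, so the endpoint returns 500. The kubelet's HTTP prober treats any status outside the 200-399 range as failure, and reports "connection refused" when nothing is listening at all (as in the marketplace-operator and downloads probes that follow). A simplified sketch of those semantics, with a stand-in URL, not the kubelet's actual prober code:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probe issues a GET and applies HTTP-probe semantics: 2xx/3xx is success,
// anything else is failure, and the start of the body is kept for logging.
func probe(url string) (ok bool, body string, err error) {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false, "", err // e.g. "connect: connection refused"
	}
	defer resp.Body.Close()
	b, _ := io.ReadAll(io.LimitReader(resp.Body, 1024)) // start-of-body only
	ok = resp.StatusCode >= 200 && resp.StatusCode < 400
	return ok, string(b), nil
}

func main() {
	// Hypothetical endpoint standing in for the healthz checks in the log.
	ok, body, err := probe("http://10.217.0.36:8080/healthz")
	fmt.Println(ok, body, err)
}
```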
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.624500 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.624727 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.124685283 +0000 UTC m=+165.761747882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.625294 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.625770 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.12576099 +0000 UTC m=+165.762823589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.688164 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bdmzk" event={"ID":"dce7e8a0-f532-4564-9e0b-771e10667429","Type":"ContainerStarted","Data":"cb15bfe095bc8146c5e38ea8f616f843b881855d8266d5688c268366d2dac398"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.689944 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f8fc3da8122caea241a8be6fbd8117263608224d3aacc11781d8ee7aa4ee0259"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.690755 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.692060 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g6gsm" event={"ID":"2b31422a-a539-4d2d-ba7b-0b7cffd27bf2","Type":"ContainerStarted","Data":"107b1cc6a4eb49f737fac5bc202d544f1ecaeefacb99118685f720494936a208"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.694598 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"004edf7a305f48c1438bba7c5e5b288ce72141601a72f272c2e692d1dc497871"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.697788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5q9nt" event={"ID":"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9","Type":"ContainerStarted","Data":"452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.699596 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" event={"ID":"0336ee48-8f1e-49ed-a021-a01446330b39","Type":"ContainerStarted","Data":"d7f7e4780a7f54b20969ac568bf80f5f4759069aa2b4a0017452cc9491e699c0"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.702937 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" event={"ID":"30b7b1ea-4fad-47ff-8278-6d1e3f256b51","Type":"ContainerStarted","Data":"680e8190ccab74bddfa928ec977bd9a8ab78fbbf03f4bf68cdedc68ad28b9c15"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.703026 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" event={"ID":"30b7b1ea-4fad-47ff-8278-6d1e3f256b51","Type":"ContainerStarted","Data":"1a78da5504708b8323ae345f0c2bfc2ab51b26428a570b53841d4265b958c892"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.704772 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" event={"ID":"d35d5638-daa3-4829-bae0-449278a71719","Type":"ContainerStarted","Data":"4720f4ac94b0d34f0de2a27fa5795cecfca32e328ac944422cfe55dce81eaf16"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.704827 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" event={"ID":"d35d5638-daa3-4829-bae0-449278a71719","Type":"ContainerStarted","Data":"322b4bce78dbfbddacd1c44b17943f7b30f4493fa31180098a515b593eb7c207"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.706523 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d4zn6" event={"ID":"26373528-9c79-419c-a68b-8cce50827fd5","Type":"ContainerStarted","Data":"02b858bc61efed0d80ed7fe3ed568df0c8efc24372bfddd242740a8359541a5d"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.708984 4740 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mjf5j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.709031 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" podUID="75b41ccc-dc45-4c27-8b9e-99cdddb63824" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.710019 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4f223d88cd5e84742d0ed86c8b6b10cf1a44f223b93d3bf0adb10c76940388f0"} Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.711190 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.711610 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.711693 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.726969 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.727502 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.227481071 +0000 UTC m=+165.864543670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.727815 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tgvth" Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.747909 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5cbjc" Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.806011 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k6bfq" podStartSLOduration=133.805987625 podStartE2EDuration="2m13.805987625s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:36.80577487 +0000 UTC m=+165.442837469" watchObservedRunningTime="2026-01-30 15:58:36.805987625 +0000 UTC m=+165.443050224" Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.807270 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mscjf" podStartSLOduration=133.807264907 podStartE2EDuration="2m13.807264907s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:36.75597349 +0000 UTC m=+165.393036089" watchObservedRunningTime="2026-01-30 15:58:36.807264907 +0000 UTC m=+165.444327506" Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.831740 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.843708 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.343686673 +0000 UTC m=+165.980749272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.934332 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:36 crc kubenswrapper[4740]: E0130 15:58:36.935016 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.434977454 +0000 UTC m=+166.072040053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.926300 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-lpktp" podStartSLOduration=133.926276738 podStartE2EDuration="2m13.926276738s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:36.921697394 +0000 UTC m=+165.558759993" watchObservedRunningTime="2026-01-30 15:58:36.926276738 +0000 UTC m=+165.563339337" Jan 30 15:58:36 crc kubenswrapper[4740]: I0130 15:58:36.954412 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7dgp6" Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.036036 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:37 crc kubenswrapper[4740]: E0130 15:58:37.037015 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.536997133 +0000 UTC m=+166.174059732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.142645 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:37 crc kubenswrapper[4740]: E0130 15:58:37.143214 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.643185116 +0000 UTC m=+166.280247715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.168834 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g9wqg" Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.248190 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:37 crc kubenswrapper[4740]: E0130 15:58:37.248609 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.748589199 +0000 UTC m=+166.385651798 (durationBeforeRetry 500ms). 
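The alternating reconciler_common / nestedpendingoperations records are the volume manager's reconcile loop at work: each sweep re-issues the pending mount or unmount, the operation fails, and a per-volume gate is stamped "No retries permitted until" 500ms in the future, which the next sweep respects. An illustrative reduction of that gating pattern (not the kubelet's implementation; all names here are invented):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// gate refuses to re-run an operation for a key until its backoff expires.
type gate struct {
	mu    sync.Mutex
	until map[string]time.Time
}

func (g *gate) run(key string, backoff time.Duration, op func() error) error {
	g.mu.Lock()
	if t, ok := g.until[key]; ok && time.Now().Before(t) {
		g.mu.Unlock()
		return fmt.Errorf("no retries permitted until %s", t.Format(time.RFC3339Nano))
	}
	g.mu.Unlock()
	if err := op(); err != nil {
		g.mu.Lock()
		g.until[key] = time.Now().Add(backoff) // stamp the retry gate
		g.mu.Unlock()
		return err
	}
	return nil
}

func main() {
	g := &gate{until: map[string]time.Time{}}
	mount := func() error { return errors.New("driver not registered") }
	// Sweeps arrive faster than the backoff, so intermediate attempts are gated.
	for i := 0; i < 3; i++ {
		fmt.Println(g.run("pvc-657094db", 500*time.Millisecond, mount))
		time.Sleep(200 * time.Millisecond)
	}
}
```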
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.334631 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:37 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:37 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:37 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.334707 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.350255 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:37 crc kubenswrapper[4740]: E0130 15:58:37.350750 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.850722761 +0000 UTC m=+166.487785360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.351223 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 15:53:36 +0000 UTC, rotation deadline is 2026-10-28 20:18:36.274216353 +0000 UTC Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.351240 4740 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6508h19m58.922978899s for next certificate rotation Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.452207 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:37 crc kubenswrapper[4740]: E0130 15:58:37.452698 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:37.952681999 +0000 UTC m=+166.589744598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.553648 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:37 crc kubenswrapper[4740]: E0130 15:58:37.553803 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.053762814 +0000 UTC m=+166.690825413 (durationBeforeRetry 500ms). 
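The certificate_manager records above schedule serving-certificate rotation: the certificate issued via csr-9vs5d expires 2027-01-30 15:53:36, and the rotation deadline of 2026-10-28 20:18:36 lands at a jittered point roughly three quarters of the way through its validity, hence the 6508h wait. A quick check of that fraction, assuming the certificate became valid one year before its stated expiration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	// Values copied from the certificate_manager lines above.
	notAfter, _ := time.Parse(layout, "2027-01-30 15:53:36 +0000 UTC")
	deadline, _ := time.Parse(layout, "2026-10-28 20:18:36 +0000 UTC")
	// Assumption: a one-year validity period, so notBefore is a year earlier.
	notBefore := notAfter.AddDate(-1, 0, 0)

	total := notAfter.Sub(notBefore)
	used := deadline.Sub(notBefore)
	fmt.Printf("rotation at %.0f%% of validity\n", 100*used.Hours()/total.Hours())
}
```

The exact jitter policy is internal to the kubelet's certificate manager; the point of the sketch is only that the deadline is a fraction of the lifetime, not a fixed lead time before expiry.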
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.553907 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:37 crc kubenswrapper[4740]: E0130 15:58:37.554284 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.054269557 +0000 UTC m=+166.691332146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.655288 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:37 crc kubenswrapper[4740]: E0130 15:58:37.656045 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.155999169 +0000 UTC m=+166.793061768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.757274 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:37 crc kubenswrapper[4740]: E0130 15:58:37.757802 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.257779612 +0000 UTC m=+166.894842211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.773798 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" event={"ID":"74fcc367-6e97-4c45-83ec-d3257c125bff","Type":"ContainerStarted","Data":"ad83e7c44bdd3985cca48939c62fbf2422a6a25d1f9a44d6284cdddb60a4bfc4"} Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.773847 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" event={"ID":"74fcc367-6e97-4c45-83ec-d3257c125bff","Type":"ContainerStarted","Data":"aadc54258477d00ec44de57e5b56dd927d428e02e74ee05e1b3f0b8b576ed283"} Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.858047 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:37 crc kubenswrapper[4740]: E0130 15:58:37.861550 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.359001391 +0000 UTC m=+166.996063990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:37 crc kubenswrapper[4740]: I0130 15:58:37.960488 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:37 crc kubenswrapper[4740]: E0130 15:58:37.961027 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.46100278 +0000 UTC m=+167.098065379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.061420 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.061688 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.561639624 +0000 UTC m=+167.198702223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.061754 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.062335 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.562325692 +0000 UTC m=+167.199388301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.162823 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.162998 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.662956376 +0000 UTC m=+167.300018975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.163069 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.163444 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.663428968 +0000 UTC m=+167.300491567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.214977 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q7xmh"] Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.216155 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7xmh" Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.221884 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.230704 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7xmh"] Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.264318 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.264537 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.764489453 +0000 UTC m=+167.401552052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.264629 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-catalog-content\") pod \"community-operators-q7xmh\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " pod="openshift-marketplace/community-operators-q7xmh" Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.264712 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-utilities\") pod \"community-operators-q7xmh\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " pod="openshift-marketplace/community-operators-q7xmh" Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.264822 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.264949 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbhmd\" (UniqueName: \"kubernetes.io/projected/b2316420-3b75-4623-a7ef-3ae90e376158-kube-api-access-kbhmd\") pod \"community-operators-q7xmh\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " pod="openshift-marketplace/community-operators-q7xmh" Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.265251 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.765235331 +0000 UTC m=+167.402297930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.284220 4740 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.349183 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:38 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:38 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:38 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.349283 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.366670 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.366933 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.866893351 +0000 UTC m=+167.503955950 (durationBeforeRetry 500ms). 
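The plugin_watcher record above is the turning point for the long-running CSI failures: the registration socket for kubevirt.io.hostpath-provisioner has appeared under /var/lib/kubelet/plugins_registry/, so the driver can complete its registration handshake, after which the queued mount and unmount retries can finally succeed. A hypothetical node-local check that something is listening on that socket (path copied from the log line; this would be run on the node itself):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the plugin_watcher record above.
	const sock = "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
	conn, err := net.DialTimeout("unix", sock, time.Second)
	if err != nil {
		fmt.Println("no listener yet:", err) // registration has not happened
		return
	}
	conn.Close()
	fmt.Println("plugin is accepting connections on", sock)
}
```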
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.367020 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-utilities\") pod \"community-operators-q7xmh\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " pod="openshift-marketplace/community-operators-q7xmh"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.367069 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.367106 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbhmd\" (UniqueName: \"kubernetes.io/projected/b2316420-3b75-4623-a7ef-3ae90e376158-kube-api-access-kbhmd\") pod \"community-operators-q7xmh\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " pod="openshift-marketplace/community-operators-q7xmh"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.367132 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-catalog-content\") pod \"community-operators-q7xmh\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " pod="openshift-marketplace/community-operators-q7xmh"
Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.367584 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.867561588 +0000 UTC m=+167.504624187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.367869 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-catalog-content\") pod \"community-operators-q7xmh\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " pod="openshift-marketplace/community-operators-q7xmh"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.367919 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-utilities\") pod \"community-operators-q7xmh\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " pod="openshift-marketplace/community-operators-q7xmh"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.404263 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbhmd\" (UniqueName: \"kubernetes.io/projected/b2316420-3b75-4623-a7ef-3ae90e376158-kube-api-access-kbhmd\") pod \"community-operators-q7xmh\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " pod="openshift-marketplace/community-operators-q7xmh"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.412871 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vsqch"]
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.414038 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.427336 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.430716 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vsqch"]
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.468646 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.468799 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.968770347 +0000 UTC m=+167.605832946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.468957 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xltj\" (UniqueName: \"kubernetes.io/projected/d3a1319e-f522-47f8-91ad-71235f9e9f45-kube-api-access-5xltj\") pod \"certified-operators-vsqch\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.469005 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-catalog-content\") pod \"certified-operators-vsqch\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.469030 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-utilities\") pod \"certified-operators-vsqch\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.469099 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.469427 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:38.969419543 +0000 UTC m=+167.606482142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.534041 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7xmh"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.572054 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.572325 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:39.072286873 +0000 UTC m=+167.709349472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.572396 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xltj\" (UniqueName: \"kubernetes.io/projected/d3a1319e-f522-47f8-91ad-71235f9e9f45-kube-api-access-5xltj\") pod \"certified-operators-vsqch\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.572446 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-catalog-content\") pod \"certified-operators-vsqch\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.572468 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-utilities\") pod \"certified-operators-vsqch\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.572507 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.572853 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 15:58:39.072839737 +0000 UTC m=+167.709902326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7qkld" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.573986 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-utilities\") pod \"certified-operators-vsqch\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.574061 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-catalog-content\") pod \"certified-operators-vsqch\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.592325 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xltj\" (UniqueName: \"kubernetes.io/projected/d3a1319e-f522-47f8-91ad-71235f9e9f45-kube-api-access-5xltj\") pod \"certified-operators-vsqch\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.618641 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h84r2"]
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.621117 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h84r2"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.673877 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.674532 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-catalog-content\") pod \"community-operators-h84r2\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " pod="openshift-marketplace/community-operators-h84r2"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.674576 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7m54\" (UniqueName: \"kubernetes.io/projected/acfebdef-152a-4173-9dc4-685dcf2f0a80-kube-api-access-x7m54\") pod \"community-operators-h84r2\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " pod="openshift-marketplace/community-operators-h84r2"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.674632 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-utilities\") pod \"community-operators-h84r2\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " pod="openshift-marketplace/community-operators-h84r2"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.681667 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h84r2"]
Jan 30 15:58:38 crc kubenswrapper[4740]: E0130 15:58:38.681768 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 15:58:39.174745743 +0000 UTC m=+167.811808342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.701838 4740 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T15:58:38.28447914Z","Handler":null,"Name":""}
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.729632 4740 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.729702 4740 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.736549 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.780150 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-catalog-content\") pod \"community-operators-h84r2\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " pod="openshift-marketplace/community-operators-h84r2"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.780213 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7m54\" (UniqueName: \"kubernetes.io/projected/acfebdef-152a-4173-9dc4-685dcf2f0a80-kube-api-access-x7m54\") pod \"community-operators-h84r2\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " pod="openshift-marketplace/community-operators-h84r2"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.780255 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.780275 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-utilities\") pod \"community-operators-h84r2\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " pod="openshift-marketplace/community-operators-h84r2"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.780751 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-utilities\") pod \"community-operators-h84r2\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " pod="openshift-marketplace/community-operators-h84r2"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.780982 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-catalog-content\") pod \"community-operators-h84r2\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " pod="openshift-marketplace/community-operators-h84r2"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.787626 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.787681 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.809107 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7m54\" (UniqueName: \"kubernetes.io/projected/acfebdef-152a-4173-9dc4-685dcf2f0a80-kube-api-access-x7m54\") pod \"community-operators-h84r2\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " pod="openshift-marketplace/community-operators-h84r2"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.817409 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xzvss"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.817457 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xzvss"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.820647 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tzjnc"]
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.821930 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzjnc"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.822878 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" event={"ID":"74fcc367-6e97-4c45-83ec-d3257c125bff","Type":"ContainerStarted","Data":"0122dc636d7419f2061a6c2f9c7328c77e2dd108f2a81a17e4790b0520fa62cf"}
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.847170 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzjnc"]
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.850616 4740 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xzvss container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]log ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]etcd ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]poststarthook/max-in-flight-filter ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]poststarthook/openshift.io-startinformers ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 30 15:58:38 crc kubenswrapper[4740]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 30 15:58:38 crc kubenswrapper[4740]: livez check failed
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.850704 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xzvss" podUID="fef3a2ff-8e8b-4e93-80b7-bd7b0249e223" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.888013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-utilities\") pod \"certified-operators-tzjnc\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " pod="openshift-marketplace/certified-operators-tzjnc"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.888171 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-catalog-content\") pod \"certified-operators-tzjnc\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " pod="openshift-marketplace/certified-operators-tzjnc"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.888197 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bztn2\" (UniqueName: \"kubernetes.io/projected/346cd514-9fca-47d2-9c9e-3bfe5872e936-kube-api-access-bztn2\") pod \"certified-operators-tzjnc\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " pod="openshift-marketplace/certified-operators-tzjnc"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.910302 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rqtgp" podStartSLOduration=11.910276435 podStartE2EDuration="11.910276435s" podCreationTimestamp="2026-01-30 15:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:38.907312271 +0000 UTC m=+167.544374870" watchObservedRunningTime="2026-01-30 15:58:38.910276435 +0000 UTC m=+167.547339034"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.922253 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.947029 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vc2l4"
Jan 30 15:58:38 crc kubenswrapper[4740]: I0130 15:58:38.987591 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h84r2"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:38.999921 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7qkld\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.001316 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-utilities\") pod \"certified-operators-tzjnc\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " pod="openshift-marketplace/certified-operators-tzjnc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.001427 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-catalog-content\") pod \"certified-operators-tzjnc\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " pod="openshift-marketplace/certified-operators-tzjnc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.001461 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bztn2\" (UniqueName: \"kubernetes.io/projected/346cd514-9fca-47d2-9c9e-3bfe5872e936-kube-api-access-bztn2\") pod \"certified-operators-tzjnc\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " pod="openshift-marketplace/certified-operators-tzjnc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.003221 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-catalog-content\") pod \"certified-operators-tzjnc\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " pod="openshift-marketplace/certified-operators-tzjnc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.014612 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-utilities\") pod \"certified-operators-tzjnc\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " pod="openshift-marketplace/certified-operators-tzjnc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.051791 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bztn2\" (UniqueName: \"kubernetes.io/projected/346cd514-9fca-47d2-9c9e-3bfe5872e936-kube-api-access-bztn2\") pod \"certified-operators-tzjnc\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " pod="openshift-marketplace/certified-operators-tzjnc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.102826 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.153598 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.157789 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzjnc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.185889 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.258025 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7xmh"]
Jan 30 15:58:39 crc kubenswrapper[4740]: W0130 15:58:39.271542 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2316420_3b75_4623_a7ef_3ae90e376158.slice/crio-b15750428119cf412a8611851b74116c0e2acb0a08c0b8222cb4549feb73fd7a WatchSource:0}: Error finding container b15750428119cf412a8611851b74116c0e2acb0a08c0b8222cb4549feb73fd7a: Status 404 returned error can't find the container with id b15750428119cf412a8611851b74116c0e2acb0a08c0b8222cb4549feb73fd7a
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.289525 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.297786 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.298983 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.302832 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.303238 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.326043 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.337023 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 15:58:39 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld
Jan 30 15:58:39 crc kubenswrapper[4740]: [+]process-running ok
Jan 30 15:58:39 crc kubenswrapper[4740]: healthz check failed
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.337193 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.386890 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.387636 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vsqch"]
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.419995 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf449534-ccc1-4566-94ce-33fd609a1098-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf449534-ccc1-4566-94ce-33fd609a1098\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.420051 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf449534-ccc1-4566-94ce-33fd609a1098-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf449534-ccc1-4566-94ce-33fd609a1098\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.502024 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h84r2"]
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.526082 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf449534-ccc1-4566-94ce-33fd609a1098-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf449534-ccc1-4566-94ce-33fd609a1098\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.526129 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf449534-ccc1-4566-94ce-33fd609a1098-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf449534-ccc1-4566-94ce-33fd609a1098\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.526210 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf449534-ccc1-4566-94ce-33fd609a1098-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf449534-ccc1-4566-94ce-33fd609a1098\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.555038 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf449534-ccc1-4566-94ce-33fd609a1098-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf449534-ccc1-4566-94ce-33fd609a1098\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.654125 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.697029 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzjnc"]
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.778133 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qkld"]
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.832244 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnc" event={"ID":"346cd514-9fca-47d2-9c9e-3bfe5872e936","Type":"ContainerStarted","Data":"c866eeb5f7eb3b82d9a2d37b64b71ac7f8417eee3b46c6663465ccbb8784e0a9"}
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.847426 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsqch" event={"ID":"d3a1319e-f522-47f8-91ad-71235f9e9f45","Type":"ContainerStarted","Data":"e816f2b39984414d086b3007776d632e6a16bc24d0bc46f3bc1508f9514251d0"}
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.882055 4740 generic.go:334] "Generic (PLEG): container finished" podID="b4be91ca-c1df-458b-b8da-29f713fefe22" containerID="4b76f876c40cbe9002b8b761ac6ec48d6d0794b158b9eb9f96785d0d672a8c9d" exitCode=0
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.882136 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" event={"ID":"b4be91ca-c1df-458b-b8da-29f713fefe22","Type":"ContainerDied","Data":"4b76f876c40cbe9002b8b761ac6ec48d6d0794b158b9eb9f96785d0d672a8c9d"}
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.888808 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h84r2" event={"ID":"acfebdef-152a-4173-9dc4-685dcf2f0a80","Type":"ContainerStarted","Data":"dee5b2fbdefb6a1747b843d3fd7588d60354a54425a09ef37bd1ddba5179de5b"}
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.890820 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" event={"ID":"a3bbedcf-9070-4b3b-a515-bfac82c6c83f","Type":"ContainerStarted","Data":"6df14f7d2800454fff58c5a180c5d90eeed4d748fe2173ffcfc7f47950a996a8"}
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.895011 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7xmh" event={"ID":"b2316420-3b75-4623-a7ef-3ae90e376158","Type":"ContainerStarted","Data":"ee4facab844986ba99240bc020def4dbe6be0b3fcd9c7e6fbffb3f55746e7481"}
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.895058 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7xmh" event={"ID":"b2316420-3b75-4623-a7ef-3ae90e376158","Type":"ContainerStarted","Data":"b15750428119cf412a8611851b74116c0e2acb0a08c0b8222cb4549feb73fd7a"}
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.896937 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 15:58:39 crc kubenswrapper[4740]: I0130 15:58:39.897933 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.102152 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.209688 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-whb2w"]
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.211066 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.215682 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.224924 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whb2w"]
Jan 30 15:58:40 crc kubenswrapper[4740]: W0130 15:58:40.240106 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbf449534_ccc1_4566_94ce_33fd609a1098.slice/crio-8f7404271bd5cca20581a7088d0e099c02e34ebbd160f6bd00238b496c8b169f WatchSource:0}: Error finding container 8f7404271bd5cca20581a7088d0e099c02e34ebbd160f6bd00238b496c8b169f: Status 404 returned error can't find the container with id 8f7404271bd5cca20581a7088d0e099c02e34ebbd160f6bd00238b496c8b169f
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.315859 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.315932 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.316051 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.316098 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.331682 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pvwn4"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.335752 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 15:58:40 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld
Jan 30 15:58:40 crc kubenswrapper[4740]: [+]process-running ok
Jan 30 15:58:40 crc kubenswrapper[4740]: healthz check failed
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.335817 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.338891 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-utilities\") pod \"redhat-marketplace-whb2w\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.338986 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-catalog-content\") pod \"redhat-marketplace-whb2w\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.339067 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbs6v\" (UniqueName: \"kubernetes.io/projected/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-kube-api-access-cbs6v\") pod \"redhat-marketplace-whb2w\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.379857 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-5q9nt"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.379902 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-5q9nt"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.382153 4740 patch_prober.go:28] interesting pod/console-f9d7485db-5q9nt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.382226 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5q9nt" podUID="d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.440694 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-utilities\") pod \"redhat-marketplace-whb2w\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.440754 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-catalog-content\") pod \"redhat-marketplace-whb2w\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.440818 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbs6v\" (UniqueName: \"kubernetes.io/projected/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-kube-api-access-cbs6v\") pod \"redhat-marketplace-whb2w\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.441946 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-catalog-content\") pod \"redhat-marketplace-whb2w\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.442724 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-utilities\") pod \"redhat-marketplace-whb2w\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.470903 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbs6v\" (UniqueName: \"kubernetes.io/projected/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-kube-api-access-cbs6v\") pod \"redhat-marketplace-whb2w\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.549436 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.607015 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zzxxb"]
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.608702 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzxxb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.619307 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzxxb"]
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.746183 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmdh\" (UniqueName: \"kubernetes.io/projected/c8c97f38-4949-48b9-957c-8e8e704d3bae-kube-api-access-fvmdh\") pod \"redhat-marketplace-zzxxb\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " pod="openshift-marketplace/redhat-marketplace-zzxxb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.746250 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-utilities\") pod \"redhat-marketplace-zzxxb\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " pod="openshift-marketplace/redhat-marketplace-zzxxb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.746297 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-catalog-content\") pod \"redhat-marketplace-zzxxb\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " pod="openshift-marketplace/redhat-marketplace-zzxxb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.848473 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-utilities\") pod \"redhat-marketplace-zzxxb\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " pod="openshift-marketplace/redhat-marketplace-zzxxb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.848538 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-catalog-content\") pod \"redhat-marketplace-zzxxb\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " pod="openshift-marketplace/redhat-marketplace-zzxxb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.848634 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvmdh\" (UniqueName: \"kubernetes.io/projected/c8c97f38-4949-48b9-957c-8e8e704d3bae-kube-api-access-fvmdh\") pod \"redhat-marketplace-zzxxb\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " pod="openshift-marketplace/redhat-marketplace-zzxxb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.851647 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-catalog-content\") pod \"redhat-marketplace-zzxxb\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " pod="openshift-marketplace/redhat-marketplace-zzxxb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.851911 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-utilities\") pod \"redhat-marketplace-zzxxb\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " pod="openshift-marketplace/redhat-marketplace-zzxxb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.875618 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvmdh\" (UniqueName: \"kubernetes.io/projected/c8c97f38-4949-48b9-957c-8e8e704d3bae-kube-api-access-fvmdh\") pod \"redhat-marketplace-zzxxb\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " pod="openshift-marketplace/redhat-marketplace-zzxxb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.921206 4740 generic.go:334] "Generic (PLEG): container finished" podID="b2316420-3b75-4623-a7ef-3ae90e376158" containerID="ee4facab844986ba99240bc020def4dbe6be0b3fcd9c7e6fbffb3f55746e7481" exitCode=0
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.921327 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7xmh" event={"ID":"b2316420-3b75-4623-a7ef-3ae90e376158","Type":"ContainerDied","Data":"ee4facab844986ba99240bc020def4dbe6be0b3fcd9c7e6fbffb3f55746e7481"}
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.923571 4740 generic.go:334] "Generic (PLEG): container finished" podID="346cd514-9fca-47d2-9c9e-3bfe5872e936" containerID="15ead62a96e65830663beeb8e79aab090023cea751dece3d72c890f65b705c87" exitCode=0
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.923654 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnc" event={"ID":"346cd514-9fca-47d2-9c9e-3bfe5872e936","Type":"ContainerDied","Data":"15ead62a96e65830663beeb8e79aab090023cea751dece3d72c890f65b705c87"}
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.928387 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf449534-ccc1-4566-94ce-33fd609a1098","Type":"ContainerStarted","Data":"e82a704a49dc97935342f354de162bf276111e0fc0a1f232ce1a3691a6c8727b"}
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.928413 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf449534-ccc1-4566-94ce-33fd609a1098","Type":"ContainerStarted","Data":"8f7404271bd5cca20581a7088d0e099c02e34ebbd160f6bd00238b496c8b169f"}
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.931851 4740 generic.go:334] "Generic (PLEG): container finished" podID="d3a1319e-f522-47f8-91ad-71235f9e9f45" containerID="0d9801c843d4a646a87298a154435b3bf737586ba54afc218b6c271b1e9fdc6a" exitCode=0
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.931938 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsqch" event={"ID":"d3a1319e-f522-47f8-91ad-71235f9e9f45","Type":"ContainerDied","Data":"0d9801c843d4a646a87298a154435b3bf737586ba54afc218b6c271b1e9fdc6a"}
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.933395 4740 generic.go:334] "Generic (PLEG): container finished" podID="acfebdef-152a-4173-9dc4-685dcf2f0a80" containerID="ba362b6810db1ff4599f53e37b526018c69606e90e8ab57771184e2f6eb72d1a" exitCode=0
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.933433 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h84r2" event={"ID":"acfebdef-152a-4173-9dc4-685dcf2f0a80","Type":"ContainerDied","Data":"ba362b6810db1ff4599f53e37b526018c69606e90e8ab57771184e2f6eb72d1a"}
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.937372 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" event={"ID":"a3bbedcf-9070-4b3b-a515-bfac82c6c83f","Type":"ContainerStarted","Data":"d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860"}
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.939940 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzxxb"
Jan 30 15:58:40 crc kubenswrapper[4740]: I0130 15:58:40.963598 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.963575235 podStartE2EDuration="1.963575235s" podCreationTimestamp="2026-01-30 15:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:40.962253552 +0000 UTC m=+169.599316151" watchObservedRunningTime="2026-01-30 15:58:40.963575235 +0000 UTC m=+169.600637834"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.021253 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" podStartSLOduration=138.021229559 podStartE2EDuration="2m18.021229559s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:41.013766104 +0000 UTC m=+169.650828703" watchObservedRunningTime="2026-01-30 15:58:41.021229559 +0000 UTC m=+169.658292158"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.085614 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whb2w"]
Jan 30 15:58:41 crc kubenswrapper[4740]: W0130 15:58:41.098007 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06656d2a_d0cf_48c5_b4f3_7780519a8bc2.slice/crio-a1f503029cb751324fa640a8abffceccdb8a0773b7a30730003155d010376b02 WatchSource:0}: Error finding container a1f503029cb751324fa640a8abffceccdb8a0773b7a30730003155d010376b02: Status 404 returned error can't find the container with id a1f503029cb751324fa640a8abffceccdb8a0773b7a30730003155d010376b02
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.303697 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.337267 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 15:58:41 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld
Jan 30 15:58:41 crc kubenswrapper[4740]: [+]process-running ok
Jan 30 15:58:41 crc kubenswrapper[4740]: healthz check failed
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.337440 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.354866 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzxxb"]
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.356450 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4be91ca-c1df-458b-b8da-29f713fefe22-secret-volume\") pod \"b4be91ca-c1df-458b-b8da-29f713fefe22\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") "
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.356504 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjzn2\" (UniqueName: \"kubernetes.io/projected/b4be91ca-c1df-458b-b8da-29f713fefe22-kube-api-access-rjzn2\") pod \"b4be91ca-c1df-458b-b8da-29f713fefe22\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") "
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.356622 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4be91ca-c1df-458b-b8da-29f713fefe22-config-volume\") pod \"b4be91ca-c1df-458b-b8da-29f713fefe22\" (UID: \"b4be91ca-c1df-458b-b8da-29f713fefe22\") "
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.358018 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4be91ca-c1df-458b-b8da-29f713fefe22-config-volume" (OuterVolumeSpecName: "config-volume") pod "b4be91ca-c1df-458b-b8da-29f713fefe22" (UID: "b4be91ca-c1df-458b-b8da-29f713fefe22"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 15:58:41 crc kubenswrapper[4740]: W0130 15:58:41.360329 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8c97f38_4949_48b9_957c_8e8e704d3bae.slice/crio-f85c509e17d2b96eb859965094e936f16995fd11f2b172195ab1b90b5dd4c17e WatchSource:0}: Error finding container f85c509e17d2b96eb859965094e936f16995fd11f2b172195ab1b90b5dd4c17e: Status 404 returned error can't find the container with id f85c509e17d2b96eb859965094e936f16995fd11f2b172195ab1b90b5dd4c17e
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.363656 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4be91ca-c1df-458b-b8da-29f713fefe22-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b4be91ca-c1df-458b-b8da-29f713fefe22" (UID: "b4be91ca-c1df-458b-b8da-29f713fefe22"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.364096 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4be91ca-c1df-458b-b8da-29f713fefe22-kube-api-access-rjzn2" (OuterVolumeSpecName: "kube-api-access-rjzn2") pod "b4be91ca-c1df-458b-b8da-29f713fefe22" (UID: "b4be91ca-c1df-458b-b8da-29f713fefe22"). InnerVolumeSpecName "kube-api-access-rjzn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.412118 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kws9w"]
Jan 30 15:58:41 crc kubenswrapper[4740]: E0130 15:58:41.412442 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4be91ca-c1df-458b-b8da-29f713fefe22" containerName="collect-profiles"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.412460 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4be91ca-c1df-458b-b8da-29f713fefe22" containerName="collect-profiles"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.412614 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4be91ca-c1df-458b-b8da-29f713fefe22" containerName="collect-profiles"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.413651 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.416855 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.430322 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kws9w"]
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.459215 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2wbr\" (UniqueName: \"kubernetes.io/projected/880ca711-7365-46af-b0dc-c0500d79f658-kube-api-access-x2wbr\") pod \"redhat-operators-kws9w\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.459406 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-utilities\") pod \"redhat-operators-kws9w\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.459454 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-catalog-content\") pod \"redhat-operators-kws9w\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.459502 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b4be91ca-c1df-458b-b8da-29f713fefe22-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.459517 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjzn2\" (UniqueName: \"kubernetes.io/projected/b4be91ca-c1df-458b-b8da-29f713fefe22-kube-api-access-rjzn2\") on node \"crc\" DevicePath \"\""
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.459526 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4be91ca-c1df-458b-b8da-29f713fefe22-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.561645 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-catalog-content\") pod \"redhat-operators-kws9w\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.561746 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2wbr\" (UniqueName: \"kubernetes.io/projected/880ca711-7365-46af-b0dc-c0500d79f658-kube-api-access-x2wbr\") pod \"redhat-operators-kws9w\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.561829 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-utilities\") pod \"redhat-operators-kws9w\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.562256 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-catalog-content\") pod \"redhat-operators-kws9w\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.562323 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-utilities\") pod \"redhat-operators-kws9w\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.600871 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2wbr\" (UniqueName: \"kubernetes.io/projected/880ca711-7365-46af-b0dc-c0500d79f658-kube-api-access-x2wbr\") pod \"redhat-operators-kws9w\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.790782 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.806231 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vll9v"]
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.807390 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vll9v"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.856732 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vll9v"]
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.867698 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-catalog-content\") pod \"redhat-operators-vll9v\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " pod="openshift-marketplace/redhat-operators-vll9v"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.867766 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxvm\" (UniqueName: \"kubernetes.io/projected/78371818-b6fe-4aff-aa1a-95d25333ccb6-kube-api-access-rnxvm\") pod \"redhat-operators-vll9v\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " pod="openshift-marketplace/redhat-operators-vll9v"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.867845 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-utilities\") pod \"redhat-operators-vll9v\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " pod="openshift-marketplace/redhat-operators-vll9v"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.969295 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-utilities\") pod \"redhat-operators-vll9v\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " pod="openshift-marketplace/redhat-operators-vll9v"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.969903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-catalog-content\") pod \"redhat-operators-vll9v\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " pod="openshift-marketplace/redhat-operators-vll9v"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.969951 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-utilities\") pod \"redhat-operators-vll9v\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " pod="openshift-marketplace/redhat-operators-vll9v"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.969967 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnxvm\" (UniqueName: \"kubernetes.io/projected/78371818-b6fe-4aff-aa1a-95d25333ccb6-kube-api-access-rnxvm\") pod \"redhat-operators-vll9v\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " pod="openshift-marketplace/redhat-operators-vll9v"
Jan 30 15:58:41 crc kubenswrapper[4740]: I0130 15:58:41.970543 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-catalog-content\") pod \"redhat-operators-vll9v\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " pod="openshift-marketplace/redhat-operators-vll9v"
Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.016991 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-rnxvm\" (UniqueName: \"kubernetes.io/projected/78371818-b6fe-4aff-aa1a-95d25333ccb6-kube-api-access-rnxvm\") pod \"redhat-operators-vll9v\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " pod="openshift-marketplace/redhat-operators-vll9v" Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.019154 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.019110 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t" event={"ID":"b4be91ca-c1df-458b-b8da-29f713fefe22","Type":"ContainerDied","Data":"b03f6ba481ba932e7a79cee9dd1b3931b98a12a7e6866b842a0c0e22c9b084dc"} Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.019467 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b03f6ba481ba932e7a79cee9dd1b3931b98a12a7e6866b842a0c0e22c9b084dc" Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.033652 4740 generic.go:334] "Generic (PLEG): container finished" podID="bf449534-ccc1-4566-94ce-33fd609a1098" containerID="e82a704a49dc97935342f354de162bf276111e0fc0a1f232ce1a3691a6c8727b" exitCode=0 Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.033881 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf449534-ccc1-4566-94ce-33fd609a1098","Type":"ContainerDied","Data":"e82a704a49dc97935342f354de162bf276111e0fc0a1f232ce1a3691a6c8727b"} Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.067206 4740 generic.go:334] "Generic (PLEG): container finished" podID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerID="214d2ec89097e11dbbb0ca0a3cffb43a3458904655e771eb4e2c1b48f9ec35bf" exitCode=0 Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.067999 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzxxb" event={"ID":"c8c97f38-4949-48b9-957c-8e8e704d3bae","Type":"ContainerDied","Data":"214d2ec89097e11dbbb0ca0a3cffb43a3458904655e771eb4e2c1b48f9ec35bf"} Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.068030 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzxxb" event={"ID":"c8c97f38-4949-48b9-957c-8e8e704d3bae","Type":"ContainerStarted","Data":"f85c509e17d2b96eb859965094e936f16995fd11f2b172195ab1b90b5dd4c17e"} Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.083877 4740 generic.go:334] "Generic (PLEG): container finished" podID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerID="417bb3f0791a11888623f76af5ca39ca4f4609234a5c31e18f23c889410b5ec6" exitCode=0 Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.086613 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whb2w" event={"ID":"06656d2a-d0cf-48c5-b4f3-7780519a8bc2","Type":"ContainerDied","Data":"417bb3f0791a11888623f76af5ca39ca4f4609234a5c31e18f23c889410b5ec6"} Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.086813 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.086862 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whb2w" 
event={"ID":"06656d2a-d0cf-48c5-b4f3-7780519a8bc2","Type":"ContainerStarted","Data":"a1f503029cb751324fa640a8abffceccdb8a0773b7a30730003155d010376b02"} Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.188778 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vll9v" Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.339392 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:42 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:42 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:42 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.339480 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.370073 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kws9w"] Jan 30 15:58:42 crc kubenswrapper[4740]: W0130 15:58:42.416745 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod880ca711_7365_46af_b0dc_c0500d79f658.slice/crio-6d136224d7c6d7b967088763ca4b59e6040a803858fcc27f1342d0906c9e3a20 WatchSource:0}: Error finding container 6d136224d7c6d7b967088763ca4b59e6040a803858fcc27f1342d0906c9e3a20: Status 404 returned error can't find the container with id 6d136224d7c6d7b967088763ca4b59e6040a803858fcc27f1342d0906c9e3a20 Jan 30 15:58:42 crc kubenswrapper[4740]: I0130 15:58:42.787099 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vll9v"] Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.166873 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vll9v" event={"ID":"78371818-b6fe-4aff-aa1a-95d25333ccb6","Type":"ContainerStarted","Data":"9b7286e81db3f2b6f2ae2f047f6c39127e2ff414ef06e23f582a905697de2a52"} Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.176317 4740 generic.go:334] "Generic (PLEG): container finished" podID="880ca711-7365-46af-b0dc-c0500d79f658" containerID="36472cd3c126c9be87d73da22690e783b1fa5ffe8e930e8008294bd63b447be2" exitCode=0 Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.176627 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kws9w" event={"ID":"880ca711-7365-46af-b0dc-c0500d79f658","Type":"ContainerDied","Data":"36472cd3c126c9be87d73da22690e783b1fa5ffe8e930e8008294bd63b447be2"} Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.176706 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kws9w" event={"ID":"880ca711-7365-46af-b0dc-c0500d79f658","Type":"ContainerStarted","Data":"6d136224d7c6d7b967088763ca4b59e6040a803858fcc27f1342d0906c9e3a20"} Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.333846 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:43 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:43 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:43 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.333919 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.694068 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.711824 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf449534-ccc1-4566-94ce-33fd609a1098-kubelet-dir\") pod \"bf449534-ccc1-4566-94ce-33fd609a1098\" (UID: \"bf449534-ccc1-4566-94ce-33fd609a1098\") " Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.711887 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf449534-ccc1-4566-94ce-33fd609a1098-kube-api-access\") pod \"bf449534-ccc1-4566-94ce-33fd609a1098\" (UID: \"bf449534-ccc1-4566-94ce-33fd609a1098\") " Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.712739 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf449534-ccc1-4566-94ce-33fd609a1098-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf449534-ccc1-4566-94ce-33fd609a1098" (UID: "bf449534-ccc1-4566-94ce-33fd609a1098"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.720603 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf449534-ccc1-4566-94ce-33fd609a1098-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf449534-ccc1-4566-94ce-33fd609a1098" (UID: "bf449534-ccc1-4566-94ce-33fd609a1098"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.812950 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf449534-ccc1-4566-94ce-33fd609a1098-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.812995 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf449534-ccc1-4566-94ce-33fd609a1098-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.832244 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:43 crc kubenswrapper[4740]: I0130 15:58:43.837642 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xzvss" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.193076 4740 generic.go:334] "Generic (PLEG): container finished" podID="78371818-b6fe-4aff-aa1a-95d25333ccb6" containerID="0e7c9057bdefabe1f74b7dba0aecb3cc005853331c780ff20026924ab3e00da2" exitCode=0 Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.194330 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vll9v" event={"ID":"78371818-b6fe-4aff-aa1a-95d25333ccb6","Type":"ContainerDied","Data":"0e7c9057bdefabe1f74b7dba0aecb3cc005853331c780ff20026924ab3e00da2"} Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.206889 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.206952 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf449534-ccc1-4566-94ce-33fd609a1098","Type":"ContainerDied","Data":"8f7404271bd5cca20581a7088d0e099c02e34ebbd160f6bd00238b496c8b169f"} Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.206991 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7404271bd5cca20581a7088d0e099c02e34ebbd160f6bd00238b496c8b169f" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.334114 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:44 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:44 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:44 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.334202 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.413698 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 15:58:44 crc kubenswrapper[4740]: E0130 15:58:44.413964 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf449534-ccc1-4566-94ce-33fd609a1098" containerName="pruner" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.413979 
4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf449534-ccc1-4566-94ce-33fd609a1098" containerName="pruner" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.414096 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf449534-ccc1-4566-94ce-33fd609a1098" containerName="pruner" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.414653 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.426208 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.426511 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.441470 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.530611 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cbf5120-b807-4abf-b9b3-1729a00720da-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5cbf5120-b807-4abf-b9b3-1729a00720da\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.530847 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cbf5120-b807-4abf-b9b3-1729a00720da-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5cbf5120-b807-4abf-b9b3-1729a00720da\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.632498 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cbf5120-b807-4abf-b9b3-1729a00720da-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5cbf5120-b807-4abf-b9b3-1729a00720da\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.632592 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cbf5120-b807-4abf-b9b3-1729a00720da-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5cbf5120-b807-4abf-b9b3-1729a00720da\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.632596 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cbf5120-b807-4abf-b9b3-1729a00720da-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5cbf5120-b807-4abf-b9b3-1729a00720da\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.660652 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cbf5120-b807-4abf-b9b3-1729a00720da-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5cbf5120-b807-4abf-b9b3-1729a00720da\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.752570 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 15:58:44 crc kubenswrapper[4740]: I0130 15:58:44.970294 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wdjz2" Jan 30 15:58:45 crc kubenswrapper[4740]: I0130 15:58:45.253038 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 15:58:45 crc kubenswrapper[4740]: I0130 15:58:45.334868 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:45 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:45 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:45 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:45 crc kubenswrapper[4740]: I0130 15:58:45.334996 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:45 crc kubenswrapper[4740]: I0130 15:58:45.969828 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:45 crc kubenswrapper[4740]: I0130 15:58:45.995499 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f93a9ce-6677-48e3-9476-c37aa40b6347-metrics-certs\") pod \"network-metrics-daemon-krvcv\" (UID: \"7f93a9ce-6677-48e3-9476-c37aa40b6347\") " pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:46 crc kubenswrapper[4740]: I0130 15:58:46.271188 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-krvcv" Jan 30 15:58:46 crc kubenswrapper[4740]: I0130 15:58:46.311752 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5cbf5120-b807-4abf-b9b3-1729a00720da","Type":"ContainerStarted","Data":"9191733d44061852cd55b8fc4ae6c62db20f1c4575b1438e7e9eb8f65aa9d491"} Jan 30 15:58:46 crc kubenswrapper[4740]: I0130 15:58:46.335463 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:46 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:46 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:46 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:46 crc kubenswrapper[4740]: I0130 15:58:46.335556 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:46 crc kubenswrapper[4740]: I0130 15:58:46.824044 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-krvcv"] Jan 30 15:58:47 crc kubenswrapper[4740]: I0130 15:58:47.336592 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:47 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:47 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:47 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:47 crc kubenswrapper[4740]: I0130 15:58:47.336686 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:47 crc kubenswrapper[4740]: I0130 15:58:47.359957 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-krvcv" event={"ID":"7f93a9ce-6677-48e3-9476-c37aa40b6347","Type":"ContainerStarted","Data":"1026b77c1a09b6049427119c395f44f135fd9662d475533e499ba91f4a7c9a1d"} Jan 30 15:58:47 crc kubenswrapper[4740]: I0130 15:58:47.360084 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5cbf5120-b807-4abf-b9b3-1729a00720da","Type":"ContainerStarted","Data":"6aba3f26520d4e2be92841f0203eb6170c47d94707b855e6dbe155ce7a06f7a8"} Jan 30 15:58:47 crc kubenswrapper[4740]: I0130 15:58:47.368427 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.36836723 podStartE2EDuration="3.36836723s" podCreationTimestamp="2026-01-30 15:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:47.36674895 +0000 UTC m=+176.003811549" watchObservedRunningTime="2026-01-30 15:58:47.36836723 +0000 UTC m=+176.005429839" Jan 30 15:58:48 crc kubenswrapper[4740]: I0130 15:58:48.335559 4740 patch_prober.go:28] interesting 
pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:48 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:48 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:48 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:48 crc kubenswrapper[4740]: I0130 15:58:48.335881 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:48 crc kubenswrapper[4740]: I0130 15:58:48.374590 4740 generic.go:334] "Generic (PLEG): container finished" podID="5cbf5120-b807-4abf-b9b3-1729a00720da" containerID="6aba3f26520d4e2be92841f0203eb6170c47d94707b855e6dbe155ce7a06f7a8" exitCode=0 Jan 30 15:58:48 crc kubenswrapper[4740]: I0130 15:58:48.374698 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5cbf5120-b807-4abf-b9b3-1729a00720da","Type":"ContainerDied","Data":"6aba3f26520d4e2be92841f0203eb6170c47d94707b855e6dbe155ce7a06f7a8"} Jan 30 15:58:48 crc kubenswrapper[4740]: I0130 15:58:48.383559 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-krvcv" event={"ID":"7f93a9ce-6677-48e3-9476-c37aa40b6347","Type":"ContainerStarted","Data":"75b38d541702539e84387e7f724eab03c3157d91a6062d802dea64807cfab093"} Jan 30 15:58:49 crc kubenswrapper[4740]: I0130 15:58:49.340306 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:49 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:49 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:49 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:49 crc kubenswrapper[4740]: I0130 15:58:49.341425 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:50 crc kubenswrapper[4740]: I0130 15:58:50.315614 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:58:50 crc kubenswrapper[4740]: I0130 15:58:50.316332 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:58:50 crc kubenswrapper[4740]: I0130 15:58:50.315614 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:58:50 crc kubenswrapper[4740]: I0130 15:58:50.316486 4740 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:58:50 crc kubenswrapper[4740]: I0130 15:58:50.333780 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:50 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:50 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:50 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:50 crc kubenswrapper[4740]: I0130 15:58:50.333845 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:50 crc kubenswrapper[4740]: I0130 15:58:50.378644 4740 patch_prober.go:28] interesting pod/console-f9d7485db-5q9nt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 30 15:58:50 crc kubenswrapper[4740]: I0130 15:58:50.378723 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5q9nt" podUID="d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 30 15:58:51 crc kubenswrapper[4740]: I0130 15:58:51.333548 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:51 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:51 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:51 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:51 crc kubenswrapper[4740]: I0130 15:58:51.333644 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:52 crc kubenswrapper[4740]: I0130 15:58:52.335513 4740 patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:52 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:52 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:52 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:52 crc kubenswrapper[4740]: I0130 15:58:52.335614 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:53 crc kubenswrapper[4740]: I0130 15:58:53.334754 4740 
patch_prober.go:28] interesting pod/router-default-5444994796-pvwn4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 15:58:53 crc kubenswrapper[4740]: [-]has-synced failed: reason withheld Jan 30 15:58:53 crc kubenswrapper[4740]: [+]process-running ok Jan 30 15:58:53 crc kubenswrapper[4740]: healthz check failed Jan 30 15:58:53 crc kubenswrapper[4740]: I0130 15:58:53.334832 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pvwn4" podUID="ee70f092-28be-470d-961b-0c777d465523" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 15:58:54 crc kubenswrapper[4740]: I0130 15:58:54.352110 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:54 crc kubenswrapper[4740]: I0130 15:58:54.356093 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pvwn4" Jan 30 15:58:54 crc kubenswrapper[4740]: I0130 15:58:54.471095 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:58:54 crc kubenswrapper[4740]: I0130 15:58:54.471152 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:58:57 crc kubenswrapper[4740]: I0130 15:58:57.685735 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 15:58:57 crc kubenswrapper[4740]: I0130 15:58:57.748262 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cbf5120-b807-4abf-b9b3-1729a00720da-kubelet-dir\") pod \"5cbf5120-b807-4abf-b9b3-1729a00720da\" (UID: \"5cbf5120-b807-4abf-b9b3-1729a00720da\") " Jan 30 15:58:57 crc kubenswrapper[4740]: I0130 15:58:57.748337 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cbf5120-b807-4abf-b9b3-1729a00720da-kube-api-access\") pod \"5cbf5120-b807-4abf-b9b3-1729a00720da\" (UID: \"5cbf5120-b807-4abf-b9b3-1729a00720da\") " Jan 30 15:58:57 crc kubenswrapper[4740]: I0130 15:58:57.748436 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cbf5120-b807-4abf-b9b3-1729a00720da-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5cbf5120-b807-4abf-b9b3-1729a00720da" (UID: "5cbf5120-b807-4abf-b9b3-1729a00720da"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:58:57 crc kubenswrapper[4740]: I0130 15:58:57.748607 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cbf5120-b807-4abf-b9b3-1729a00720da-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 15:58:57 crc kubenswrapper[4740]: I0130 15:58:57.754249 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cbf5120-b807-4abf-b9b3-1729a00720da-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5cbf5120-b807-4abf-b9b3-1729a00720da" (UID: "5cbf5120-b807-4abf-b9b3-1729a00720da"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:58:57 crc kubenswrapper[4740]: I0130 15:58:57.849946 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cbf5120-b807-4abf-b9b3-1729a00720da-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 15:58:58 crc kubenswrapper[4740]: I0130 15:58:58.537610 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 15:58:58 crc kubenswrapper[4740]: I0130 15:58:58.537605 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5cbf5120-b807-4abf-b9b3-1729a00720da","Type":"ContainerDied","Data":"9191733d44061852cd55b8fc4ae6c62db20f1c4575b1438e7e9eb8f65aa9d491"} Jan 30 15:58:58 crc kubenswrapper[4740]: I0130 15:58:58.537744 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9191733d44061852cd55b8fc4ae6c62db20f1c4575b1438e7e9eb8f65aa9d491" Jan 30 15:58:58 crc kubenswrapper[4740]: I0130 15:58:58.543413 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-krvcv" event={"ID":"7f93a9ce-6677-48e3-9476-c37aa40b6347","Type":"ContainerStarted","Data":"2c55125149d827ee20311349cab79b64d769d9f22d11f5e81b6daeb9c7281c01"} Jan 30 15:58:58 crc kubenswrapper[4740]: I0130 15:58:58.759093 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fjvms"] Jan 30 15:58:58 crc kubenswrapper[4740]: I0130 15:58:58.759464 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" podUID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" containerName="controller-manager" containerID="cri-o://3376d873a7cd5e6892cc30d9929763db9ef0a7c2c511a465a043315c74c60e58" gracePeriod=30 Jan 30 15:58:58 crc kubenswrapper[4740]: I0130 15:58:58.784410 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr"] Jan 30 15:58:58 crc kubenswrapper[4740]: I0130 15:58:58.784675 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" podUID="1430672f-603b-4f60-bb2a-e95cd48a56c2" containerName="route-controller-manager" containerID="cri-o://27d9624993a8b61f97e83d7bda8bd817ffb7a5baf6ad146ff679531541272f76" gracePeriod=30 Jan 30 15:58:59 crc kubenswrapper[4740]: I0130 15:58:59.149310 4740 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fjvms container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 30 15:58:59 crc kubenswrapper[4740]: I0130 15:58:59.149598 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" podUID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 30 15:58:59 crc kubenswrapper[4740]: I0130 15:58:59.296126 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" Jan 30 15:58:59 crc kubenswrapper[4740]: I0130 15:58:59.550857 4740 generic.go:334] "Generic (PLEG): container finished" podID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" containerID="3376d873a7cd5e6892cc30d9929763db9ef0a7c2c511a465a043315c74c60e58" exitCode=0 Jan 30 15:58:59 crc kubenswrapper[4740]: I0130 15:58:59.550977 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" event={"ID":"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf","Type":"ContainerDied","Data":"3376d873a7cd5e6892cc30d9929763db9ef0a7c2c511a465a043315c74c60e58"} Jan 30 15:58:59 crc kubenswrapper[4740]: I0130 15:58:59.553009 4740 generic.go:334] "Generic (PLEG): container finished" podID="1430672f-603b-4f60-bb2a-e95cd48a56c2" containerID="27d9624993a8b61f97e83d7bda8bd817ffb7a5baf6ad146ff679531541272f76" exitCode=0 Jan 30 15:58:59 crc kubenswrapper[4740]: I0130 15:58:59.553114 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" event={"ID":"1430672f-603b-4f60-bb2a-e95cd48a56c2","Type":"ContainerDied","Data":"27d9624993a8b61f97e83d7bda8bd817ffb7a5baf6ad146ff679531541272f76"} Jan 30 15:58:59 crc kubenswrapper[4740]: I0130 15:58:59.568098 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-krvcv" podStartSLOduration=156.568074731 podStartE2EDuration="2m36.568074731s" podCreationTimestamp="2026-01-30 15:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:58:59.566334608 +0000 UTC m=+188.203397217" watchObservedRunningTime="2026-01-30 15:58:59.568074731 +0000 UTC m=+188.205137330" Jan 30 15:59:00 crc kubenswrapper[4740]: I0130 15:59:00.316199 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:59:00 crc kubenswrapper[4740]: I0130 15:59:00.316284 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:59:00 crc kubenswrapper[4740]: I0130 15:59:00.316199 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:59:00 crc kubenswrapper[4740]: 
I0130 15:59:00.316343 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-vvtsd" Jan 30 15:59:00 crc kubenswrapper[4740]: I0130 15:59:00.316372 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:59:00 crc kubenswrapper[4740]: I0130 15:59:00.317024 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:59:00 crc kubenswrapper[4740]: I0130 15:59:00.317127 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:59:00 crc kubenswrapper[4740]: I0130 15:59:00.317132 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"9447fc54666659448c2b302e795db47cc514aa218c5d4eb3777b8f4344b046c2"} pod="openshift-console/downloads-7954f5f757-vvtsd" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 30 15:59:00 crc kubenswrapper[4740]: I0130 15:59:00.317229 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" containerID="cri-o://9447fc54666659448c2b302e795db47cc514aa218c5d4eb3777b8f4344b046c2" gracePeriod=2 Jan 30 15:59:00 crc kubenswrapper[4740]: I0130 15:59:00.393949 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:59:00 crc kubenswrapper[4740]: I0130 15:59:00.397922 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 15:59:01 crc kubenswrapper[4740]: I0130 15:59:01.565755 4740 generic.go:334] "Generic (PLEG): container finished" podID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerID="9447fc54666659448c2b302e795db47cc514aa218c5d4eb3777b8f4344b046c2" exitCode=0 Jan 30 15:59:01 crc kubenswrapper[4740]: I0130 15:59:01.565822 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vvtsd" event={"ID":"dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0","Type":"ContainerDied","Data":"9447fc54666659448c2b302e795db47cc514aa218c5d4eb3777b8f4344b046c2"} Jan 30 15:59:09 crc kubenswrapper[4740]: I0130 15:59:09.659782 4740 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-s6nlr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 15:59:09 crc kubenswrapper[4740]: I0130 15:59:09.660409 4740 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" podUID="1430672f-603b-4f60-bb2a-e95cd48a56c2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 15:59:09 crc kubenswrapper[4740]: I0130 15:59:09.828921 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-npf9j" Jan 30 15:59:10 crc kubenswrapper[4740]: I0130 15:59:10.148892 4740 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fjvms container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 15:59:10 crc kubenswrapper[4740]: I0130 15:59:10.149093 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" podUID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 15:59:10 crc kubenswrapper[4740]: I0130 15:59:10.315787 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:59:10 crc kubenswrapper[4740]: I0130 15:59:10.315882 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:59:11 crc kubenswrapper[4740]: E0130 15:59:11.034244 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 15:59:11 crc kubenswrapper[4740]: E0130 15:59:11.034516 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7m54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h84r2_openshift-marketplace(acfebdef-152a-4173-9dc4-685dcf2f0a80): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 15:59:11 crc kubenswrapper[4740]: E0130 15:59:11.046101 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h84r2" podUID="acfebdef-152a-4173-9dc4-685dcf2f0a80"
Jan 30 15:59:11 crc kubenswrapper[4740]: E0130 15:59:11.335472 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 30 15:59:11 crc kubenswrapper[4740]: E0130 15:59:11.335661 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kbhmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-q7xmh_openshift-marketplace(b2316420-3b75-4623-a7ef-3ae90e376158): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 15:59:11 crc kubenswrapper[4740]: E0130 15:59:11.336833 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-q7xmh" podUID="b2316420-3b75-4623-a7ef-3ae90e376158"
Jan 30 15:59:11 crc kubenswrapper[4740]: I0130 15:59:11.597307 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 15:59:16 crc kubenswrapper[4740]: E0130 15:59:16.834474 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 15:59:16 crc kubenswrapper[4740]: E0130 15:59:16.835065 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbs6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-whb2w_openshift-marketplace(06656d2a-d0cf-48c5-b4f3-7780519a8bc2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 15:59:16 crc kubenswrapper[4740]: E0130 15:59:16.836403 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-whb2w" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.604380 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 15:59:18 crc kubenswrapper[4740]: E0130 15:59:18.604927 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cbf5120-b807-4abf-b9b3-1729a00720da" containerName="pruner"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.604959 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cbf5120-b807-4abf-b9b3-1729a00720da" containerName="pruner"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.605293 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cbf5120-b807-4abf-b9b3-1729a00720da" containerName="pruner"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.606219 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
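The &Container{...} blobs above are kubelet's Go struct dumps of the init-container spec it was asked to start. For readers unfamiliar with that form, a minimal sketch, assuming nothing beyond the values visible in the entries, of the same spec written against the k8s.io/api/core/v1 types (the pointer helpers are local, not client-go API):

```go
// A sketch, not kubelet source: the community-operators-h84r2 init container
// expressed with the types the log serializes. Values copied from the entry.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	extractContent := corev1.Container{
		Name:    "extract-content",
		Image:   "registry.redhat.io/redhat/community-operator-index:v4.18",
		Command: []string{"/utilities/copy-content"},
		Args: []string{
			"--catalog.from=/configs", "--catalog.to=/extracted-catalog/catalog",
			"--cache.from=/tmp/cache", "--cache.to=/extracted-catalog/cache",
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "utilities", MountPath: "/utilities"},
			{Name: "catalog-content", MountPath: "/extracted-catalog"},
		},
		// ImagePullPolicy Always is why every start attempt re-pulls the index
		// image and can fail again with ErrImagePull.
		ImagePullPolicy:          corev1.PullAlways,
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:                int64Ptr(1000170000),
			RunAsNonRoot:             boolPtr(true),
			AllowPrivilegeEscalation: boolPtr(false),
		},
	}
	fmt.Printf("%+v\n", extractContent)
}
```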
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.609137 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.609250 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.620076 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.664764 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fc03a8b0-cb97-4525-a9dd-7add8cff9e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.664849 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fc03a8b0-cb97-4525-a9dd-7add8cff9e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.766480 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fc03a8b0-cb97-4525-a9dd-7add8cff9e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.766816 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fc03a8b0-cb97-4525-a9dd-7add8cff9e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.766926 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fc03a8b0-cb97-4525-a9dd-7add8cff9e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 15:59:18 crc kubenswrapper[4740]: E0130 15:59:18.785752 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q7xmh" podUID="b2316420-3b75-4623-a7ef-3ae90e376158"
Jan 30 15:59:18 crc kubenswrapper[4740]: E0130 15:59:18.786136 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h84r2" podUID="acfebdef-152a-4173-9dc4-685dcf2f0a80"
Jan 30 15:59:18 crc kubenswrapper[4740]: E0130 15:59:18.786230 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-whb2w" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.807742 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fc03a8b0-cb97-4525-a9dd-7add8cff9e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 15:59:18 crc kubenswrapper[4740]: I0130 15:59:18.938891 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 15:59:19 crc kubenswrapper[4740]: I0130 15:59:19.660906 4740 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-s6nlr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 15:59:19 crc kubenswrapper[4740]: I0130 15:59:19.660999 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" podUID="1430672f-603b-4f60-bb2a-e95cd48a56c2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 15:59:19 crc kubenswrapper[4740]: E0130 15:59:19.750224 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 30 15:59:19 crc kubenswrapper[4740]: E0130 15:59:19.750799 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2wbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kws9w_openshift-marketplace(880ca711-7365-46af-b0dc-c0500d79f658): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 15:59:19 crc kubenswrapper[4740]: E0130 15:59:19.752735 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kws9w" podUID="880ca711-7365-46af-b0dc-c0500d79f658"
Jan 30 15:59:20 crc kubenswrapper[4740]: I0130 15:59:20.149746 4740 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fjvms container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 15:59:20 crc kubenswrapper[4740]: I0130 15:59:20.149877 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" podUID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 15:59:20 crc kubenswrapper[4740]: I0130 15:59:20.315960 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 30 15:59:20 crc kubenswrapper[4740]: I0130 15:59:20.316053 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.801980 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
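The readiness failures above quote net/http's standard wording: when a client's overall Timeout fires before the response headers arrive, the error ends in "(Client.Timeout exceeded while awaiting headers)", and the prober passes that string through as probe output. A minimal sketch reproducing the shape of the message (the address is the probed endpoint from the log; any unresponsive target behaves the same, and the leading phrase varies slightly between Go versions):

```go
// A sketch of how the probe error string arises: a client-side Timeout,
// not a server-side failure, produces the "(Client.Timeout exceeded while
// awaiting headers)" suffix seen in the prober output above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	// 10.217.0.8:8443 stands in for the route-controller-manager endpoint
	// probed in the log; it is unreachable from outside that cluster.
	_, err := client.Get("https://10.217.0.8:8443/healthz")
	if err != nil {
		fmt.Println(err) // ...(Client.Timeout exceeded while awaiting headers)
	}
}
```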
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.803441 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.822131 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.832246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/256ad354-d723-4e72-bfcb-7ea85487109a-kube-api-access\") pod \"installer-9-crc\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.832321 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-var-lock\") pod \"installer-9-crc\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.832444 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.933086 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/256ad354-d723-4e72-bfcb-7ea85487109a-kube-api-access\") pod \"installer-9-crc\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.933749 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-var-lock\") pod \"installer-9-crc\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.933804 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-var-lock\") pod \"installer-9-crc\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.934200 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.934472 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 15:59:22 crc kubenswrapper[4740]: I0130 15:59:22.967303 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/256ad354-d723-4e72-bfcb-7ea85487109a-kube-api-access\") pod \"installer-9-crc\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 15:59:23 crc kubenswrapper[4740]: I0130 15:59:23.162210 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 15:59:24 crc kubenswrapper[4740]: I0130 15:59:24.455274 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 15:59:24 crc kubenswrapper[4740]: I0130 15:59:24.455378 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 15:59:29 crc kubenswrapper[4740]: I0130 15:59:29.661383 4740 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-s6nlr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 15:59:29 crc kubenswrapper[4740]: I0130 15:59:29.661923 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" podUID="1430672f-603b-4f60-bb2a-e95cd48a56c2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 15:59:30 crc kubenswrapper[4740]: I0130 15:59:30.148903 4740 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fjvms container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 15:59:30 crc kubenswrapper[4740]: I0130 15:59:30.149006 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" podUID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 15:59:30 crc kubenswrapper[4740]: I0130 15:59:30.318038 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 30 15:59:30 crc kubenswrapper[4740]: I0130 15:59:30.318194 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
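Every pull failure in this section carries "rpc error: code = Canceled", which is the gRPC status a CRI call surfaces when its context is canceled mid-transfer. A minimal sketch, with pullImage standing in for the real ImageService PullImage RPC, of how that error is produced and classified on the caller side:

```go
// A sketch, not CRI client code: shows how a canceled context becomes
// "rpc error: code = Canceled" and how a caller recognizes it.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// pullImage is a hypothetical stand-in for the ImageService PullImage RPC.
func pullImage(ctx context.Context) error {
	<-ctx.Done() // the pull is abandoned, e.g. the requesting pod went away
	return status.FromContextError(ctx.Err()).Err()
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	err := pullImage(ctx)
	fmt.Println(err)                                // rpc error: code = Canceled desc = context canceled
	fmt.Println(status.Code(err) == codes.Canceled) // true
}
```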
podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:59:36 crc kubenswrapper[4740]: E0130 15:59:36.983862 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 15:59:36 crc kubenswrapper[4740]: E0130 15:59:36.984312 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnxvm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vll9v_openshift-marketplace(78371818-b6fe-4aff-aa1a-95d25333ccb6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 15:59:36 crc kubenswrapper[4740]: E0130 15:59:36.985817 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vll9v" podUID="78371818-b6fe-4aff-aa1a-95d25333ccb6" Jan 30 15:59:37 crc kubenswrapper[4740]: E0130 15:59:37.055710 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 15:59:37 crc kubenswrapper[4740]: E0130 15:59:37.055905 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bztn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tzjnc_openshift-marketplace(346cd514-9fca-47d2-9c9e-3bfe5872e936): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 15:59:37 crc kubenswrapper[4740]: E0130 15:59:37.057299 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tzjnc" podUID="346cd514-9fca-47d2-9c9e-3bfe5872e936" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.070248 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.075863 4740 util.go:48] "No ready sandbox for pod can be found. 
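After repeated ErrImagePull failures, the entries for the same pods switch to ImagePullBackOff ("Back-off pulling image ..."). A hand-rolled sketch of the pattern only, not kubelet's actual implementation: a doubling, capped retry delay (the 10s initial delay and 5m cap here are assumptions for illustration):

```go
// A sketch of the ErrImagePull -> ImagePullBackOff progression: each failed
// pull pushes the next attempt further out, up to a fixed ceiling.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 10 * time.Second // assumed starting delay
		max     = 5 * time.Minute  // assumed ceiling
	)
	delay := initial
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed: ErrImagePull; next retry in %s (ImagePullBackOff)\n",
			attempt, delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
}
```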
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.108074 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-98cc8f9d-5h7qg"]
Jan 30 15:59:37 crc kubenswrapper[4740]: E0130 15:59:37.108417 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" containerName="controller-manager"
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.108439 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" containerName="controller-manager"
Jan 30 15:59:37 crc kubenswrapper[4740]: E0130 15:59:37.108467 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1430672f-603b-4f60-bb2a-e95cd48a56c2" containerName="route-controller-manager"
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.108477 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1430672f-603b-4f60-bb2a-e95cd48a56c2" containerName="route-controller-manager"
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.108613 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" containerName="controller-manager"
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.108628 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1430672f-603b-4f60-bb2a-e95cd48a56c2" containerName="route-controller-manager"
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.109207 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg"
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.131000 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-98cc8f9d-5h7qg"]
Jan 30 15:59:37 crc kubenswrapper[4740]: E0130 15:59:37.133274 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 30 15:59:37 crc kubenswrapper[4740]: E0130 15:59:37.133605 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xltj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vsqch_openshift-marketplace(d3a1319e-f522-47f8-91ad-71235f9e9f45): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 15:59:37 crc kubenswrapper[4740]: E0130 15:59:37.135407 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vsqch" podUID="d3a1319e-f522-47f8-91ad-71235f9e9f45"
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.262109 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-config\") pod \"1430672f-603b-4f60-bb2a-e95cd48a56c2\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") "
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.262511 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-serving-cert\") pod \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") "
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.262536 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpvvf\" (UniqueName: \"kubernetes.io/projected/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-kube-api-access-cpvvf\") pod \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") "
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.262579 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-client-ca\") pod \"1430672f-603b-4f60-bb2a-e95cd48a56c2\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") "
Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.262599 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-client-ca\") pod \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") "
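The reconciler lines here and below repeat a fixed grammar: operation, volume name, UniqueName, pod. When tracing a single volume through VerifyControllerAttachedVolume, MountVolume and UnmountVolume across a log this dense, a small extraction helps. A minimal sketch (the regular expression is ours, matched against one line copied from above):

```go
// A sketch of pulling (operation, volume, pod) triples out of the
// reconciler_common.go log lines above. The pattern is hand-written for
// this log format, not a kubelet-provided schema.
package main

import (
	"fmt"
	"regexp"
)

var volumeOp = regexp.MustCompile(
	`operationExecutor\.(\w+) started for volume \\"([^"\\]+)\\".*pod \\"([^"\\]+)\\"`)

func main() {
	line := `I0130 15:59:22.832246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/256ad354-d723-4e72-bfcb-7ea85487109a-kube-api-access\") pod \"installer-9-crc\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") "`
	if m := volumeOp.FindStringSubmatch(line); m != nil {
		fmt.Printf("op=%s volume=%s pod=%s\n", m[1], m[2], m[3])
		// op=VerifyControllerAttachedVolume volume=kube-api-access pod=installer-9-crc
	}
}
```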
\"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-client-ca\") pod \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.262646 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll6xb\" (UniqueName: \"kubernetes.io/projected/1430672f-603b-4f60-bb2a-e95cd48a56c2-kube-api-access-ll6xb\") pod \"1430672f-603b-4f60-bb2a-e95cd48a56c2\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.262669 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-config\") pod \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.262737 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-proxy-ca-bundles\") pod \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\" (UID: \"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf\") " Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.262765 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1430672f-603b-4f60-bb2a-e95cd48a56c2-serving-cert\") pod \"1430672f-603b-4f60-bb2a-e95cd48a56c2\" (UID: \"1430672f-603b-4f60-bb2a-e95cd48a56c2\") " Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.263062 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6762529-2483-438a-be25-61b823fc41f1-serving-cert\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.263119 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-client-ca\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.263152 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-config\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.263205 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhk6\" (UniqueName: \"kubernetes.io/projected/e6762529-2483-438a-be25-61b823fc41f1-kube-api-access-2dhk6\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.263231 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-proxy-ca-bundles\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.264186 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-config" (OuterVolumeSpecName: "config") pod "1430672f-603b-4f60-bb2a-e95cd48a56c2" (UID: "1430672f-603b-4f60-bb2a-e95cd48a56c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.264336 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-client-ca" (OuterVolumeSpecName: "client-ca") pod "78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" (UID: "78eb2bf3-1af4-4efd-8ce0-733dded1dcaf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.264736 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-client-ca" (OuterVolumeSpecName: "client-ca") pod "1430672f-603b-4f60-bb2a-e95cd48a56c2" (UID: "1430672f-603b-4f60-bb2a-e95cd48a56c2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.265443 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-config" (OuterVolumeSpecName: "config") pod "78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" (UID: "78eb2bf3-1af4-4efd-8ce0-733dded1dcaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.266952 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" (UID: "78eb2bf3-1af4-4efd-8ce0-733dded1dcaf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.283909 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.300254 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" (UID: "78eb2bf3-1af4-4efd-8ce0-733dded1dcaf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.303010 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1430672f-603b-4f60-bb2a-e95cd48a56c2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1430672f-603b-4f60-bb2a-e95cd48a56c2" (UID: "1430672f-603b-4f60-bb2a-e95cd48a56c2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.303522 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1430672f-603b-4f60-bb2a-e95cd48a56c2-kube-api-access-ll6xb" (OuterVolumeSpecName: "kube-api-access-ll6xb") pod "1430672f-603b-4f60-bb2a-e95cd48a56c2" (UID: "1430672f-603b-4f60-bb2a-e95cd48a56c2"). InnerVolumeSpecName "kube-api-access-ll6xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.304423 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-kube-api-access-cpvvf" (OuterVolumeSpecName: "kube-api-access-cpvvf") pod "78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" (UID: "78eb2bf3-1af4-4efd-8ce0-733dded1dcaf"). InnerVolumeSpecName "kube-api-access-cpvvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.353756 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365063 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-config\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365166 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhk6\" (UniqueName: \"kubernetes.io/projected/e6762529-2483-438a-be25-61b823fc41f1-kube-api-access-2dhk6\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-proxy-ca-bundles\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365250 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6762529-2483-438a-be25-61b823fc41f1-serving-cert\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365281 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-client-ca\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365339 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365454 4740 
reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365472 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1430672f-603b-4f60-bb2a-e95cd48a56c2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365485 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-config\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365495 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365511 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpvvf\" (UniqueName: \"kubernetes.io/projected/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-kube-api-access-cpvvf\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365522 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1430672f-603b-4f60-bb2a-e95cd48a56c2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365532 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.365542 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll6xb\" (UniqueName: \"kubernetes.io/projected/1430672f-603b-4f60-bb2a-e95cd48a56c2-kube-api-access-ll6xb\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.366955 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-client-ca\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.368973 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-config\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.372220 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-proxy-ca-bundles\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.378390 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6762529-2483-438a-be25-61b823fc41f1-serving-cert\") pod \"controller-manager-98cc8f9d-5h7qg\" 
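The kube-api-access-* volumes being mounted and detached above are projected service-account volumes; inside the pod they surface as three files under the mount path shown in the container dumps. A minimal sketch of the conventional in-pod read (it only succeeds when actually run inside a pod):

```go
// A sketch of what a mounted kube-api-access volume contains: token,
// ca.crt and namespace under the standard serviceaccount path.
package main

import (
	"fmt"
	"os"
)

func main() {
	const dir = "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(dir + "/" + f)
		if err != nil {
			fmt.Println(f, "unavailable (not running in a pod):", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
}
```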
(UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.386702 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhk6\" (UniqueName: \"kubernetes.io/projected/e6762529-2483-438a-be25-61b823fc41f1-kube-api-access-2dhk6\") pod \"controller-manager-98cc8f9d-5h7qg\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.460150 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.743631 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-98cc8f9d-5h7qg"] Jan 30 15:59:37 crc kubenswrapper[4740]: W0130 15:59:37.750681 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6762529_2483_438a_be25_61b823fc41f1.slice/crio-4280938db1670d126f5cb9d6e36d828b0f2d755ca5910da8c21d7fa042e141c1 WatchSource:0}: Error finding container 4280938db1670d126f5cb9d6e36d828b0f2d755ca5910da8c21d7fa042e141c1: Status 404 returned error can't find the container with id 4280938db1670d126f5cb9d6e36d828b0f2d755ca5910da8c21d7fa042e141c1 Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.822690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fc03a8b0-cb97-4525-a9dd-7add8cff9e25","Type":"ContainerStarted","Data":"1b29283ed35e5bd175fe9eb367a0f4e2a2d6ce13263b8c7a4bfb0de47f7c9840"} Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.823044 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fc03a8b0-cb97-4525-a9dd-7add8cff9e25","Type":"ContainerStarted","Data":"0969dc076ab86587a59c1d7d5c9b5ad5bca12630ae5a3f6e89bc3f589e50ce73"} Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.826963 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"256ad354-d723-4e72-bfcb-7ea85487109a","Type":"ContainerStarted","Data":"bd35219e93840b338ab71997b40489cab27eff89fc27609d39018188cce61e19"} Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.829523 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" event={"ID":"1430672f-603b-4f60-bb2a-e95cd48a56c2","Type":"ContainerDied","Data":"9c45326abd891c8b0f2db045df144b90a78edf013f0941c5322d96be684b335e"} Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.829566 4740 scope.go:117] "RemoveContainer" containerID="27d9624993a8b61f97e83d7bda8bd817ffb7a5baf6ad146ff679531541272f76" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.829603 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.831505 4740 generic.go:334] "Generic (PLEG): container finished" podID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerID="c4d19bd41ee799600c526a6d976a41348f3b3ad5ffdd4373d4b8fb87c4841634" exitCode=0 Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.831591 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzxxb" event={"ID":"c8c97f38-4949-48b9-957c-8e8e704d3bae","Type":"ContainerDied","Data":"c4d19bd41ee799600c526a6d976a41348f3b3ad5ffdd4373d4b8fb87c4841634"} Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.840956 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" event={"ID":"78eb2bf3-1af4-4efd-8ce0-733dded1dcaf","Type":"ContainerDied","Data":"07f8e2d8393a9a77917e2604ab7da96b8f13c64c1a3822e398f3e3cf8a237187"} Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.841459 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fjvms" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.849889 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=19.849320174 podStartE2EDuration="19.849320174s" podCreationTimestamp="2026-01-30 15:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:59:37.841745581 +0000 UTC m=+226.478808180" watchObservedRunningTime="2026-01-30 15:59:37.849320174 +0000 UTC m=+226.486382773" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.850109 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vvtsd" event={"ID":"dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0","Type":"ContainerStarted","Data":"9daf6f69bf51484a94f01428627d778652c564cc9f4473b9f40bc4f5e502e3b9"} Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.851230 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-vvtsd" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.852249 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.852598 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.854392 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" event={"ID":"e6762529-2483-438a-be25-61b823fc41f1","Type":"ContainerStarted","Data":"4280938db1670d126f5cb9d6e36d828b0f2d755ca5910da8c21d7fa042e141c1"} Jan 30 15:59:37 crc kubenswrapper[4740]: I0130 15:59:37.863747 4740 scope.go:117] "RemoveContainer" containerID="3376d873a7cd5e6892cc30d9929763db9ef0a7c2c511a465a043315c74c60e58" Jan 30 15:59:37 crc 
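The pod_startup_latency_tracker entry above is plain arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and the zeroed pulling timestamps mean no image pull contributed to the total. A minimal sketch reproducing the revision-pruner-9-crc figure from the values in the entry:

```go
// A sketch of the startup-duration arithmetic behind
// "Observed pod startup duration"; timestamps copied from the log entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-30 15:59:18 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-30 15:59:37.849320174 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 19.849320174s == podStartE2EDuration
}
```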
Jan 30 15:59:37 crc kubenswrapper[4740]: E0130 15:59:37.864468 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tzjnc" podUID="346cd514-9fca-47d2-9c9e-3bfe5872e936"
Jan 30 15:59:37 crc kubenswrapper[4740]: E0130 15:59:37.870718 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vll9v" podUID="78371818-b6fe-4aff-aa1a-95d25333ccb6"
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.048744 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fjvms"]
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.050847 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fjvms"]
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.065852 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr"]
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.067712 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6nlr"]
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.867172 4740 generic.go:334] "Generic (PLEG): container finished" podID="b2316420-3b75-4623-a7ef-3ae90e376158" containerID="d9e2ebd7cbcb46f06ccc2b23d41a9ed5c14b6501b6283e8ad4e0ddd37afc58b8" exitCode=0
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.867308 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7xmh" event={"ID":"b2316420-3b75-4623-a7ef-3ae90e376158","Type":"ContainerDied","Data":"d9e2ebd7cbcb46f06ccc2b23d41a9ed5c14b6501b6283e8ad4e0ddd37afc58b8"}
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.876923 4740 generic.go:334] "Generic (PLEG): container finished" podID="acfebdef-152a-4173-9dc4-685dcf2f0a80" containerID="ef5cf7eb226c8bbd0992fae82f4805069d76714cf8810e08207f75b1476f1a94" exitCode=0
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.877050 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h84r2" event={"ID":"acfebdef-152a-4173-9dc4-685dcf2f0a80","Type":"ContainerDied","Data":"ef5cf7eb226c8bbd0992fae82f4805069d76714cf8810e08207f75b1476f1a94"}
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.884988 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kws9w" event={"ID":"880ca711-7365-46af-b0dc-c0500d79f658","Type":"ContainerStarted","Data":"2bfdb7198a77d525724530864177d82fb37549eee99200a6bd439a266af8dbf1"}
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.890571 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"256ad354-d723-4e72-bfcb-7ea85487109a","Type":"ContainerStarted","Data":"003fc5e57d1d97692503cab5b86690f5d585654cd98bb8c90b2d74c0171d530d"}
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.897446 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzxxb" event={"ID":"c8c97f38-4949-48b9-957c-8e8e704d3bae","Type":"ContainerStarted","Data":"3633739a102c124ad9958a2af9d356ae27734c3d7f28439a2f269713fec94fc5"}
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.899972 4740 generic.go:334] "Generic (PLEG): container finished" podID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerID="b2daf4354a19a9c27739e455c41d0e2d3af9176cadc357027d03bee9818ac465" exitCode=0
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.900033 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whb2w" event={"ID":"06656d2a-d0cf-48c5-b4f3-7780519a8bc2","Type":"ContainerDied","Data":"b2daf4354a19a9c27739e455c41d0e2d3af9176cadc357027d03bee9818ac465"}
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.903525 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" event={"ID":"e6762529-2483-438a-be25-61b823fc41f1","Type":"ContainerStarted","Data":"5f9033091bf8de3885892d4431e68269499fc1b1d63c740c7c159ed78c13e85e"}
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.904707 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg"
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.911114 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg"
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.912770 4740 generic.go:334] "Generic (PLEG): container finished" podID="fc03a8b0-cb97-4525-a9dd-7add8cff9e25" containerID="1b29283ed35e5bd175fe9eb367a0f4e2a2d6ce13263b8c7a4bfb0de47f7c9840" exitCode=0
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.914002 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fc03a8b0-cb97-4525-a9dd-7add8cff9e25","Type":"ContainerDied","Data":"1b29283ed35e5bd175fe9eb367a0f4e2a2d6ce13263b8c7a4bfb0de47f7c9840"}
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.914136 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.914210 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.926776 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zzxxb" podStartSLOduration=2.595494897 podStartE2EDuration="58.92675144s" podCreationTimestamp="2026-01-30 15:58:40 +0000 UTC" firstStartedPulling="2026-01-30 15:58:42.070944344 +0000 UTC m=+170.708006943" lastFinishedPulling="2026-01-30 15:59:38.402200887 +0000 UTC m=+227.039263486" observedRunningTime="2026-01-30 15:59:38.922545663 +0000 UTC m=+227.559608262" watchObservedRunningTime="2026-01-30 15:59:38.92675144 +0000 UTC m=+227.563814039"
15:59:38.402200887 +0000 UTC m=+227.039263486" observedRunningTime="2026-01-30 15:59:38.922545663 +0000 UTC m=+227.559608262" watchObservedRunningTime="2026-01-30 15:59:38.92675144 +0000 UTC m=+227.563814039" Jan 30 15:59:38 crc kubenswrapper[4740]: I0130 15:59:38.967216 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=16.967184137 podStartE2EDuration="16.967184137s" podCreationTimestamp="2026-01-30 15:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:59:38.940829178 +0000 UTC m=+227.577891797" watchObservedRunningTime="2026-01-30 15:59:38.967184137 +0000 UTC m=+227.604246736" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.041848 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" podStartSLOduration=21.041826313 podStartE2EDuration="21.041826313s" podCreationTimestamp="2026-01-30 15:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:59:39.037703009 +0000 UTC m=+227.674765608" watchObservedRunningTime="2026-01-30 15:59:39.041826313 +0000 UTC m=+227.678888912" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.354158 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1430672f-603b-4f60-bb2a-e95cd48a56c2" path="/var/lib/kubelet/pods/1430672f-603b-4f60-bb2a-e95cd48a56c2/volumes" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.355505 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78eb2bf3-1af4-4efd-8ce0-733dded1dcaf" path="/var/lib/kubelet/pods/78eb2bf3-1af4-4efd-8ce0-733dded1dcaf/volumes" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.781498 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn"] Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.782362 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.795529 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.795827 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.796007 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.796185 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.798109 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.798402 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.814545 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn"] Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.913671 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rxfj\" (UniqueName: \"kubernetes.io/projected/7cd9a390-47d5-45d2-af3e-35b9d551249b-kube-api-access-9rxfj\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.913790 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-config\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.913824 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-client-ca\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.913842 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cd9a390-47d5-45d2-af3e-35b9d551249b-serving-cert\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.923571 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h84r2" 
event={"ID":"acfebdef-152a-4173-9dc4-685dcf2f0a80","Type":"ContainerStarted","Data":"6de0df4d6f0f983c61d81549eff61be4118e00230db28ddf3c86f62be6c1b32a"} Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.926241 4740 generic.go:334] "Generic (PLEG): container finished" podID="880ca711-7365-46af-b0dc-c0500d79f658" containerID="2bfdb7198a77d525724530864177d82fb37549eee99200a6bd439a266af8dbf1" exitCode=0 Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.926295 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kws9w" event={"ID":"880ca711-7365-46af-b0dc-c0500d79f658","Type":"ContainerDied","Data":"2bfdb7198a77d525724530864177d82fb37549eee99200a6bd439a266af8dbf1"} Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.930202 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7xmh" event={"ID":"b2316420-3b75-4623-a7ef-3ae90e376158","Type":"ContainerStarted","Data":"be2aeded559d301bd9d43a9e42ff29f3ea88c2f167b97f009b370771c8ae4c7a"} Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.932733 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whb2w" event={"ID":"06656d2a-d0cf-48c5-b4f3-7780519a8bc2","Type":"ContainerStarted","Data":"943d7b9001e349eb51829d30dc2d34e6b6457a90e92d05f1ab4a2554cf46738b"} Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.933341 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.933413 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.949586 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h84r2" podStartSLOduration=3.555591067 podStartE2EDuration="1m1.94954142s" podCreationTimestamp="2026-01-30 15:58:38 +0000 UTC" firstStartedPulling="2026-01-30 15:58:40.935148147 +0000 UTC m=+169.572210756" lastFinishedPulling="2026-01-30 15:59:39.32909851 +0000 UTC m=+227.966161109" observedRunningTime="2026-01-30 15:59:39.946718428 +0000 UTC m=+228.583781027" watchObservedRunningTime="2026-01-30 15:59:39.94954142 +0000 UTC m=+228.586604019" Jan 30 15:59:39 crc kubenswrapper[4740]: I0130 15:59:39.980881 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q7xmh" podStartSLOduration=2.48040982 podStartE2EDuration="1m1.980852695s" podCreationTimestamp="2026-01-30 15:58:38 +0000 UTC" firstStartedPulling="2026-01-30 15:58:39.896639062 +0000 UTC m=+168.533701661" lastFinishedPulling="2026-01-30 15:59:39.397081937 +0000 UTC m=+228.034144536" observedRunningTime="2026-01-30 15:59:39.976952006 +0000 UTC m=+228.614014605" watchObservedRunningTime="2026-01-30 15:59:39.980852695 +0000 UTC m=+228.617915294" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.015866 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-config\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.015929 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-client-ca\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.015948 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cd9a390-47d5-45d2-af3e-35b9d551249b-serving-cert\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.015997 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rxfj\" (UniqueName: \"kubernetes.io/projected/7cd9a390-47d5-45d2-af3e-35b9d551249b-kube-api-access-9rxfj\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.017495 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-config\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.019621 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-client-ca\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.025002 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cd9a390-47d5-45d2-af3e-35b9d551249b-serving-cert\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.033775 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-whb2w" podStartSLOduration=2.756975042 podStartE2EDuration="1m0.033753429s" podCreationTimestamp="2026-01-30 15:58:40 +0000 UTC" firstStartedPulling="2026-01-30 15:58:42.087370103 +0000 UTC m=+170.724432702" lastFinishedPulling="2026-01-30 15:59:39.36414849 +0000 UTC m=+228.001211089" observedRunningTime="2026-01-30 15:59:40.029223634 +0000 UTC m=+228.666286233" watchObservedRunningTime="2026-01-30 15:59:40.033753429 +0000 UTC m=+228.670816028" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.038932 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rxfj\" (UniqueName: \"kubernetes.io/projected/7cd9a390-47d5-45d2-af3e-35b9d551249b-kube-api-access-9rxfj\") pod \"route-controller-manager-864f7bcc8f-7kjmn\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.120466 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.315723 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.315749 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.315802 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.315819 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.335125 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.421601 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kube-api-access\") pod \"fc03a8b0-cb97-4525-a9dd-7add8cff9e25\" (UID: \"fc03a8b0-cb97-4525-a9dd-7add8cff9e25\") " Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.423471 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kubelet-dir\") pod \"fc03a8b0-cb97-4525-a9dd-7add8cff9e25\" (UID: \"fc03a8b0-cb97-4525-a9dd-7add8cff9e25\") " Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.423564 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fc03a8b0-cb97-4525-a9dd-7add8cff9e25" (UID: "fc03a8b0-cb97-4525-a9dd-7add8cff9e25"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.424042 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.426947 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fc03a8b0-cb97-4525-a9dd-7add8cff9e25" (UID: "fc03a8b0-cb97-4525-a9dd-7add8cff9e25"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.525427 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc03a8b0-cb97-4525-a9dd-7add8cff9e25-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.549629 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-whb2w" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.549689 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-whb2w" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.763468 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn"] Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.940103 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zzxxb" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.940472 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zzxxb" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.940985 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" event={"ID":"7cd9a390-47d5-45d2-af3e-35b9d551249b","Type":"ContainerStarted","Data":"e423051169724b09d9dad9dc0022611a49643580e0132499d4d3b7591febc265"} Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.942684 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fc03a8b0-cb97-4525-a9dd-7add8cff9e25","Type":"ContainerDied","Data":"0969dc076ab86587a59c1d7d5c9b5ad5bca12630ae5a3f6e89bc3f589e50ce73"} Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.942711 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0969dc076ab86587a59c1d7d5c9b5ad5bca12630ae5a3f6e89bc3f589e50ce73" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.942733 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.945750 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kws9w" event={"ID":"880ca711-7365-46af-b0dc-c0500d79f658","Type":"ContainerStarted","Data":"033cce39bc84e5606258e3c1e7bf92cb937ad8874ba44f615ad7454b218ef7d4"} Jan 30 15:59:40 crc kubenswrapper[4740]: I0130 15:59:40.968569 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kws9w" podStartSLOduration=2.740041645 podStartE2EDuration="59.968541722s" podCreationTimestamp="2026-01-30 15:58:41 +0000 UTC" firstStartedPulling="2026-01-30 15:58:43.187738607 +0000 UTC m=+171.824801206" lastFinishedPulling="2026-01-30 15:59:40.416238684 +0000 UTC m=+229.053301283" observedRunningTime="2026-01-30 15:59:40.966580142 +0000 UTC m=+229.603642761" watchObservedRunningTime="2026-01-30 15:59:40.968541722 +0000 UTC m=+229.605604321" Jan 30 15:59:41 crc kubenswrapper[4740]: I0130 15:59:41.791392 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kws9w" Jan 30 15:59:41 crc kubenswrapper[4740]: I0130 15:59:41.791722 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kws9w" Jan 30 15:59:41 crc kubenswrapper[4740]: I0130 15:59:41.802923 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-whb2w" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerName="registry-server" probeResult="failure" output=< Jan 30 15:59:41 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 15:59:41 crc kubenswrapper[4740]: > Jan 30 15:59:41 crc kubenswrapper[4740]: I0130 15:59:41.952477 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" event={"ID":"7cd9a390-47d5-45d2-af3e-35b9d551249b","Type":"ContainerStarted","Data":"27d8d81708bf5f810429a8cf02d767efb87b0c360cd270fce761bd5b596e94cb"} Jan 30 15:59:41 crc kubenswrapper[4740]: I0130 15:59:41.952974 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:41 crc kubenswrapper[4740]: I0130 15:59:41.973017 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" podStartSLOduration=23.972988744 podStartE2EDuration="23.972988744s" podCreationTimestamp="2026-01-30 15:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 15:59:41.972757868 +0000 UTC m=+230.609820477" watchObservedRunningTime="2026-01-30 15:59:41.972988744 +0000 UTC m=+230.610051333" Jan 30 15:59:41 crc kubenswrapper[4740]: I0130 15:59:41.995766 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zzxxb" podUID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerName="registry-server" probeResult="failure" output=< Jan 30 15:59:41 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 15:59:41 crc kubenswrapper[4740]: > Jan 30 15:59:42 crc kubenswrapper[4740]: I0130 15:59:42.021812 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 15:59:42 crc kubenswrapper[4740]: I0130 15:59:42.844920 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kws9w" podUID="880ca711-7365-46af-b0dc-c0500d79f658" containerName="registry-server" probeResult="failure" output=< Jan 30 15:59:42 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 15:59:42 crc kubenswrapper[4740]: > Jan 30 15:59:48 crc kubenswrapper[4740]: I0130 15:59:48.534937 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q7xmh" Jan 30 15:59:48 crc kubenswrapper[4740]: I0130 15:59:48.535033 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q7xmh" Jan 30 15:59:48 crc kubenswrapper[4740]: I0130 15:59:48.668917 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q7xmh" Jan 30 15:59:48 crc kubenswrapper[4740]: I0130 15:59:48.988674 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h84r2" Jan 30 15:59:48 crc kubenswrapper[4740]: I0130 15:59:48.989513 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h84r2" Jan 30 15:59:49 crc kubenswrapper[4740]: I0130 15:59:49.071618 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h84r2" Jan 30 15:59:49 crc kubenswrapper[4740]: I0130 15:59:49.116711 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q7xmh" Jan 30 15:59:50 crc kubenswrapper[4740]: I0130 15:59:50.059162 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h84r2" Jan 30 15:59:50 crc kubenswrapper[4740]: I0130 15:59:50.316178 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:59:50 crc kubenswrapper[4740]: I0130 15:59:50.316337 4740 patch_prober.go:28] interesting pod/downloads-7954f5f757-vvtsd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 30 15:59:50 crc kubenswrapper[4740]: I0130 15:59:50.316984 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:59:50 crc kubenswrapper[4740]: I0130 15:59:50.316923 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vvtsd" podUID="dca51efa-eb5b-45d2-a9ff-8c88e7b52ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 30 15:59:50 crc kubenswrapper[4740]: I0130 15:59:50.609979 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-whb2w" Jan 30 15:59:50 crc kubenswrapper[4740]: I0130 15:59:50.663306 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-whb2w" Jan 30 15:59:51 crc kubenswrapper[4740]: I0130 15:59:51.004006 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zzxxb" Jan 30 15:59:51 crc kubenswrapper[4740]: I0130 15:59:51.051457 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zzxxb" Jan 30 15:59:51 crc kubenswrapper[4740]: I0130 15:59:51.697545 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h84r2"] Jan 30 15:59:51 crc kubenswrapper[4740]: I0130 15:59:51.870978 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kws9w" Jan 30 15:59:51 crc kubenswrapper[4740]: I0130 15:59:51.928880 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kws9w" Jan 30 15:59:52 crc kubenswrapper[4740]: I0130 15:59:52.024690 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h84r2" podUID="acfebdef-152a-4173-9dc4-685dcf2f0a80" containerName="registry-server" containerID="cri-o://6de0df4d6f0f983c61d81549eff61be4118e00230db28ddf3c86f62be6c1b32a" gracePeriod=2 Jan 30 15:59:53 crc kubenswrapper[4740]: I0130 15:59:53.041781 4740 generic.go:334] "Generic (PLEG): container finished" podID="acfebdef-152a-4173-9dc4-685dcf2f0a80" containerID="6de0df4d6f0f983c61d81549eff61be4118e00230db28ddf3c86f62be6c1b32a" exitCode=0 Jan 30 15:59:53 crc kubenswrapper[4740]: I0130 15:59:53.041890 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h84r2" event={"ID":"acfebdef-152a-4173-9dc4-685dcf2f0a80","Type":"ContainerDied","Data":"6de0df4d6f0f983c61d81549eff61be4118e00230db28ddf3c86f62be6c1b32a"} Jan 30 15:59:53 crc kubenswrapper[4740]: I0130 15:59:53.261256 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h84r2" Jan 30 15:59:53 crc kubenswrapper[4740]: I0130 15:59:53.273520 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-utilities\") pod \"acfebdef-152a-4173-9dc4-685dcf2f0a80\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " Jan 30 15:59:53 crc kubenswrapper[4740]: I0130 15:59:53.273651 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-catalog-content\") pod \"acfebdef-152a-4173-9dc4-685dcf2f0a80\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " Jan 30 15:59:53 crc kubenswrapper[4740]: I0130 15:59:53.273702 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7m54\" (UniqueName: \"kubernetes.io/projected/acfebdef-152a-4173-9dc4-685dcf2f0a80-kube-api-access-x7m54\") pod \"acfebdef-152a-4173-9dc4-685dcf2f0a80\" (UID: \"acfebdef-152a-4173-9dc4-685dcf2f0a80\") " Jan 30 15:59:53 crc kubenswrapper[4740]: I0130 15:59:53.282762 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-utilities" (OuterVolumeSpecName: "utilities") pod "acfebdef-152a-4173-9dc4-685dcf2f0a80" (UID: "acfebdef-152a-4173-9dc4-685dcf2f0a80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:59:53 crc kubenswrapper[4740]: I0130 15:59:53.304300 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acfebdef-152a-4173-9dc4-685dcf2f0a80-kube-api-access-x7m54" (OuterVolumeSpecName: "kube-api-access-x7m54") pod "acfebdef-152a-4173-9dc4-685dcf2f0a80" (UID: "acfebdef-152a-4173-9dc4-685dcf2f0a80"). InnerVolumeSpecName "kube-api-access-x7m54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:59:53 crc kubenswrapper[4740]: I0130 15:59:53.377158 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:53 crc kubenswrapper[4740]: I0130 15:59:53.377756 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7m54\" (UniqueName: \"kubernetes.io/projected/acfebdef-152a-4173-9dc4-685dcf2f0a80-kube-api-access-x7m54\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:54 crc kubenswrapper[4740]: I0130 15:59:54.053156 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h84r2" event={"ID":"acfebdef-152a-4173-9dc4-685dcf2f0a80","Type":"ContainerDied","Data":"dee5b2fbdefb6a1747b843d3fd7588d60354a54425a09ef37bd1ddba5179de5b"} Jan 30 15:59:54 crc kubenswrapper[4740]: I0130 15:59:54.053920 4740 scope.go:117] "RemoveContainer" containerID="6de0df4d6f0f983c61d81549eff61be4118e00230db28ddf3c86f62be6c1b32a" Jan 30 15:59:54 crc kubenswrapper[4740]: I0130 15:59:54.053619 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h84r2" Jan 30 15:59:54 crc kubenswrapper[4740]: I0130 15:59:54.455170 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 15:59:54 crc kubenswrapper[4740]: I0130 15:59:54.455281 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 15:59:54 crc kubenswrapper[4740]: I0130 15:59:54.455438 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 15:59:54 crc kubenswrapper[4740]: I0130 15:59:54.456135 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 15:59:54 crc kubenswrapper[4740]: I0130 15:59:54.456198 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24" gracePeriod=600 Jan 30 15:59:54 crc kubenswrapper[4740]: I0130 15:59:54.692375 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzxxb"] Jan 30 15:59:54 crc kubenswrapper[4740]: I0130 15:59:54.692839 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zzxxb" podUID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerName="registry-server" containerID="cri-o://3633739a102c124ad9958a2af9d356ae27734c3d7f28439a2f269713fec94fc5" gracePeriod=2 Jan 30 15:59:55 crc kubenswrapper[4740]: I0130 15:59:55.063574 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24" exitCode=0 Jan 30 15:59:55 crc kubenswrapper[4740]: I0130 15:59:55.063653 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24"} Jan 30 15:59:55 crc kubenswrapper[4740]: I0130 15:59:55.233223 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "acfebdef-152a-4173-9dc4-685dcf2f0a80" (UID: "acfebdef-152a-4173-9dc4-685dcf2f0a80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:59:55 crc kubenswrapper[4740]: I0130 15:59:55.298428 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h84r2"] Jan 30 15:59:55 crc kubenswrapper[4740]: I0130 15:59:55.307168 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h84r2"] Jan 30 15:59:55 crc kubenswrapper[4740]: I0130 15:59:55.310193 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acfebdef-152a-4173-9dc4-685dcf2f0a80-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:55 crc kubenswrapper[4740]: I0130 15:59:55.348520 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acfebdef-152a-4173-9dc4-685dcf2f0a80" path="/var/lib/kubelet/pods/acfebdef-152a-4173-9dc4-685dcf2f0a80/volumes" Jan 30 15:59:56 crc kubenswrapper[4740]: I0130 15:59:56.075496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzxxb" event={"ID":"c8c97f38-4949-48b9-957c-8e8e704d3bae","Type":"ContainerDied","Data":"3633739a102c124ad9958a2af9d356ae27734c3d7f28439a2f269713fec94fc5"} Jan 30 15:59:56 crc kubenswrapper[4740]: I0130 15:59:56.075444 4740 generic.go:334] "Generic (PLEG): container finished" podID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerID="3633739a102c124ad9958a2af9d356ae27734c3d7f28439a2f269713fec94fc5" exitCode=0 Jan 30 15:59:56 crc kubenswrapper[4740]: I0130 15:59:56.129343 4740 scope.go:117] "RemoveContainer" containerID="ef5cf7eb226c8bbd0992fae82f4805069d76714cf8810e08207f75b1476f1a94" Jan 30 15:59:58 crc kubenswrapper[4740]: I0130 15:59:58.278404 4740 scope.go:117] "RemoveContainer" containerID="ba362b6810db1ff4599f53e37b526018c69606e90e8ab57771184e2f6eb72d1a" Jan 30 15:59:58 crc kubenswrapper[4740]: I0130 15:59:58.313390 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzxxb" Jan 30 15:59:58 crc kubenswrapper[4740]: I0130 15:59:58.362540 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvmdh\" (UniqueName: \"kubernetes.io/projected/c8c97f38-4949-48b9-957c-8e8e704d3bae-kube-api-access-fvmdh\") pod \"c8c97f38-4949-48b9-957c-8e8e704d3bae\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " Jan 30 15:59:58 crc kubenswrapper[4740]: I0130 15:59:58.362629 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-catalog-content\") pod \"c8c97f38-4949-48b9-957c-8e8e704d3bae\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " Jan 30 15:59:58 crc kubenswrapper[4740]: I0130 15:59:58.362916 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-utilities\") pod \"c8c97f38-4949-48b9-957c-8e8e704d3bae\" (UID: \"c8c97f38-4949-48b9-957c-8e8e704d3bae\") " Jan 30 15:59:58 crc kubenswrapper[4740]: I0130 15:59:58.366626 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-utilities" (OuterVolumeSpecName: "utilities") pod "c8c97f38-4949-48b9-957c-8e8e704d3bae" (UID: "c8c97f38-4949-48b9-957c-8e8e704d3bae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:59:58 crc kubenswrapper[4740]: I0130 15:59:58.371177 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c97f38-4949-48b9-957c-8e8e704d3bae-kube-api-access-fvmdh" (OuterVolumeSpecName: "kube-api-access-fvmdh") pod "c8c97f38-4949-48b9-957c-8e8e704d3bae" (UID: "c8c97f38-4949-48b9-957c-8e8e704d3bae"). InnerVolumeSpecName "kube-api-access-fvmdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 15:59:58 crc kubenswrapper[4740]: I0130 15:59:58.405105 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8c97f38-4949-48b9-957c-8e8e704d3bae" (UID: "c8c97f38-4949-48b9-957c-8e8e704d3bae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 15:59:58 crc kubenswrapper[4740]: I0130 15:59:58.466067 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:58 crc kubenswrapper[4740]: I0130 15:59:58.466114 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvmdh\" (UniqueName: \"kubernetes.io/projected/c8c97f38-4949-48b9-957c-8e8e704d3bae-kube-api-access-fvmdh\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:58 crc kubenswrapper[4740]: I0130 15:59:58.466131 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c97f38-4949-48b9-957c-8e8e704d3bae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.115072 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vll9v" event={"ID":"78371818-b6fe-4aff-aa1a-95d25333ccb6","Type":"ContainerStarted","Data":"dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e"} Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.117953 4740 generic.go:334] "Generic (PLEG): container finished" podID="346cd514-9fca-47d2-9c9e-3bfe5872e936" containerID="7862e637ee7f010e7e0b7036f9ba8defc6cd10acff5d190557df06e188010968" exitCode=0 Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.118127 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnc" event={"ID":"346cd514-9fca-47d2-9c9e-3bfe5872e936","Type":"ContainerDied","Data":"7862e637ee7f010e7e0b7036f9ba8defc6cd10acff5d190557df06e188010968"} Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.123172 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzxxb" event={"ID":"c8c97f38-4949-48b9-957c-8e8e704d3bae","Type":"ContainerDied","Data":"f85c509e17d2b96eb859965094e936f16995fd11f2b172195ab1b90b5dd4c17e"} Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.123258 4740 scope.go:117] "RemoveContainer" containerID="3633739a102c124ad9958a2af9d356ae27734c3d7f28439a2f269713fec94fc5" Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.123671 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzxxb" Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.144870 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"182b2049b868464ce8ca9205690ce1e2d3fed3750b415a7f5760d99b98292d66"} Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.177582 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsqch" event={"ID":"d3a1319e-f522-47f8-91ad-71235f9e9f45","Type":"ContainerStarted","Data":"046926360fae9327bb15aee50e34c0897098f225a3a2c4d68a115ea244d92835"} Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.182296 4740 scope.go:117] "RemoveContainer" containerID="c4d19bd41ee799600c526a6d976a41348f3b3ad5ffdd4373d4b8fb87c4841634" Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.209792 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzxxb"] Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.216216 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzxxb"] Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.223047 4740 scope.go:117] "RemoveContainer" containerID="214d2ec89097e11dbbb0ca0a3cffb43a3458904655e771eb4e2c1b48f9ec35bf" Jan 30 15:59:59 crc kubenswrapper[4740]: I0130 15:59:59.344382 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c97f38-4949-48b9-957c-8e8e704d3bae" path="/var/lib/kubelet/pods/c8c97f38-4949-48b9-957c-8e8e704d3bae/volumes" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.156956 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz"] Jan 30 16:00:00 crc kubenswrapper[4740]: E0130 16:00:00.158458 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acfebdef-152a-4173-9dc4-685dcf2f0a80" containerName="extract-content" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.158578 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfebdef-152a-4173-9dc4-685dcf2f0a80" containerName="extract-content" Jan 30 16:00:00 crc kubenswrapper[4740]: E0130 16:00:00.158722 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerName="registry-server" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.158824 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerName="registry-server" Jan 30 16:00:00 crc kubenswrapper[4740]: E0130 16:00:00.158919 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acfebdef-152a-4173-9dc4-685dcf2f0a80" containerName="registry-server" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.158994 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfebdef-152a-4173-9dc4-685dcf2f0a80" containerName="registry-server" Jan 30 16:00:00 crc kubenswrapper[4740]: E0130 16:00:00.159071 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acfebdef-152a-4173-9dc4-685dcf2f0a80" containerName="extract-utilities" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.159145 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="acfebdef-152a-4173-9dc4-685dcf2f0a80" containerName="extract-utilities" Jan 30 16:00:00 crc kubenswrapper[4740]: E0130 16:00:00.159233 4740 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerName="extract-utilities" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.159321 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerName="extract-utilities" Jan 30 16:00:00 crc kubenswrapper[4740]: E0130 16:00:00.159423 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc03a8b0-cb97-4525-a9dd-7add8cff9e25" containerName="pruner" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.159510 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc03a8b0-cb97-4525-a9dd-7add8cff9e25" containerName="pruner" Jan 30 16:00:00 crc kubenswrapper[4740]: E0130 16:00:00.159591 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerName="extract-content" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.159674 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerName="extract-content" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.159902 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c97f38-4949-48b9-957c-8e8e704d3bae" containerName="registry-server" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.160042 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="acfebdef-152a-4173-9dc4-685dcf2f0a80" containerName="registry-server" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.160126 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc03a8b0-cb97-4525-a9dd-7add8cff9e25" containerName="pruner" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.160701 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.163239 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.164212 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.171562 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz"] Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.193917 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnc" event={"ID":"346cd514-9fca-47d2-9c9e-3bfe5872e936","Type":"ContainerStarted","Data":"0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028"} Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.197879 4740 generic.go:334] "Generic (PLEG): container finished" podID="78371818-b6fe-4aff-aa1a-95d25333ccb6" containerID="dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e" exitCode=0 Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.198071 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vll9v" event={"ID":"78371818-b6fe-4aff-aa1a-95d25333ccb6","Type":"ContainerDied","Data":"dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e"} Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.205246 4740 generic.go:334] "Generic (PLEG): container finished" podID="d3a1319e-f522-47f8-91ad-71235f9e9f45" containerID="046926360fae9327bb15aee50e34c0897098f225a3a2c4d68a115ea244d92835" exitCode=0 Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.205947 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsqch" event={"ID":"d3a1319e-f522-47f8-91ad-71235f9e9f45","Type":"ContainerDied","Data":"046926360fae9327bb15aee50e34c0897098f225a3a2c4d68a115ea244d92835"} Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.224141 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tzjnc" podStartSLOduration=3.513175043 podStartE2EDuration="1m22.224121187s" podCreationTimestamp="2026-01-30 15:58:38 +0000 UTC" firstStartedPulling="2026-01-30 15:58:40.92723227 +0000 UTC m=+169.564294869" lastFinishedPulling="2026-01-30 15:59:59.638178414 +0000 UTC m=+248.275241013" observedRunningTime="2026-01-30 16:00:00.220380562 +0000 UTC m=+248.857443161" watchObservedRunningTime="2026-01-30 16:00:00.224121187 +0000 UTC m=+248.861183786" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.298692 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b18894-dbba-45c6-ab14-330dbc6e0521-config-volume\") pod \"collect-profiles-29496480-t5mrz\" (UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.299195 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b18894-dbba-45c6-ab14-330dbc6e0521-secret-volume\") pod \"collect-profiles-29496480-t5mrz\" 
(UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.299483 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlfwp\" (UniqueName: \"kubernetes.io/projected/83b18894-dbba-45c6-ab14-330dbc6e0521-kube-api-access-hlfwp\") pod \"collect-profiles-29496480-t5mrz\" (UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.338893 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-vvtsd" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.401434 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b18894-dbba-45c6-ab14-330dbc6e0521-config-volume\") pod \"collect-profiles-29496480-t5mrz\" (UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.401499 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b18894-dbba-45c6-ab14-330dbc6e0521-secret-volume\") pod \"collect-profiles-29496480-t5mrz\" (UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.401544 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlfwp\" (UniqueName: \"kubernetes.io/projected/83b18894-dbba-45c6-ab14-330dbc6e0521-kube-api-access-hlfwp\") pod \"collect-profiles-29496480-t5mrz\" (UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.403495 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b18894-dbba-45c6-ab14-330dbc6e0521-config-volume\") pod \"collect-profiles-29496480-t5mrz\" (UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.424782 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlfwp\" (UniqueName: \"kubernetes.io/projected/83b18894-dbba-45c6-ab14-330dbc6e0521-kube-api-access-hlfwp\") pod \"collect-profiles-29496480-t5mrz\" (UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.429503 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b18894-dbba-45c6-ab14-330dbc6e0521-secret-volume\") pod \"collect-profiles-29496480-t5mrz\" (UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:00 crc kubenswrapper[4740]: I0130 16:00:00.543841 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:01 crc kubenswrapper[4740]: I0130 16:00:01.043247 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz"] Jan 30 16:00:01 crc kubenswrapper[4740]: I0130 16:00:01.211573 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" event={"ID":"83b18894-dbba-45c6-ab14-330dbc6e0521","Type":"ContainerStarted","Data":"3b5efcfec99ebc8c74cc47dbe83fb1acb9ccb56f428032eb799f77680c91ea32"} Jan 30 16:00:02 crc kubenswrapper[4740]: I0130 16:00:02.224553 4740 generic.go:334] "Generic (PLEG): container finished" podID="83b18894-dbba-45c6-ab14-330dbc6e0521" containerID="7445366211bbb02f9a6e67e116b47e2b97096d3d84250fca04c12ca1ee2e1906" exitCode=0 Jan 30 16:00:02 crc kubenswrapper[4740]: I0130 16:00:02.224618 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" event={"ID":"83b18894-dbba-45c6-ab14-330dbc6e0521","Type":"ContainerDied","Data":"7445366211bbb02f9a6e67e116b47e2b97096d3d84250fca04c12ca1ee2e1906"} Jan 30 16:00:02 crc kubenswrapper[4740]: I0130 16:00:02.231933 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vll9v" event={"ID":"78371818-b6fe-4aff-aa1a-95d25333ccb6","Type":"ContainerStarted","Data":"dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670"} Jan 30 16:00:02 crc kubenswrapper[4740]: I0130 16:00:02.261252 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vll9v" podStartSLOduration=3.44972734 podStartE2EDuration="1m21.261233459s" podCreationTimestamp="2026-01-30 15:58:41 +0000 UTC" firstStartedPulling="2026-01-30 15:58:44.20127858 +0000 UTC m=+172.838341179" lastFinishedPulling="2026-01-30 16:00:02.012784699 +0000 UTC m=+250.649847298" observedRunningTime="2026-01-30 16:00:02.258419598 +0000 UTC m=+250.895482207" watchObservedRunningTime="2026-01-30 16:00:02.261233459 +0000 UTC m=+250.898296058" Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.239855 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsqch" event={"ID":"d3a1319e-f522-47f8-91ad-71235f9e9f45","Type":"ContainerStarted","Data":"0bab91095e39f89ad615d56fa28b0b4638904622fc95c92d71b4c33da580a93f"} Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.264328 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vsqch" podStartSLOduration=4.170541916 podStartE2EDuration="1m25.264302696s" podCreationTimestamp="2026-01-30 15:58:38 +0000 UTC" firstStartedPulling="2026-01-30 15:58:40.933764953 +0000 UTC m=+169.570827552" lastFinishedPulling="2026-01-30 16:00:02.027525723 +0000 UTC m=+250.664588332" observedRunningTime="2026-01-30 16:00:03.262568672 +0000 UTC m=+251.899631281" watchObservedRunningTime="2026-01-30 16:00:03.264302696 +0000 UTC m=+251.901365295" Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.662789 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.863670 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b18894-dbba-45c6-ab14-330dbc6e0521-secret-volume\") pod \"83b18894-dbba-45c6-ab14-330dbc6e0521\" (UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.863824 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b18894-dbba-45c6-ab14-330dbc6e0521-config-volume\") pod \"83b18894-dbba-45c6-ab14-330dbc6e0521\" (UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.863859 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlfwp\" (UniqueName: \"kubernetes.io/projected/83b18894-dbba-45c6-ab14-330dbc6e0521-kube-api-access-hlfwp\") pod \"83b18894-dbba-45c6-ab14-330dbc6e0521\" (UID: \"83b18894-dbba-45c6-ab14-330dbc6e0521\") " Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.866015 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b18894-dbba-45c6-ab14-330dbc6e0521-config-volume" (OuterVolumeSpecName: "config-volume") pod "83b18894-dbba-45c6-ab14-330dbc6e0521" (UID: "83b18894-dbba-45c6-ab14-330dbc6e0521"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.873700 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b18894-dbba-45c6-ab14-330dbc6e0521-kube-api-access-hlfwp" (OuterVolumeSpecName: "kube-api-access-hlfwp") pod "83b18894-dbba-45c6-ab14-330dbc6e0521" (UID: "83b18894-dbba-45c6-ab14-330dbc6e0521"). InnerVolumeSpecName "kube-api-access-hlfwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.873731 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b18894-dbba-45c6-ab14-330dbc6e0521-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83b18894-dbba-45c6-ab14-330dbc6e0521" (UID: "83b18894-dbba-45c6-ab14-330dbc6e0521"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.964830 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b18894-dbba-45c6-ab14-330dbc6e0521-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.964893 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlfwp\" (UniqueName: \"kubernetes.io/projected/83b18894-dbba-45c6-ab14-330dbc6e0521-kube-api-access-hlfwp\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:03 crc kubenswrapper[4740]: I0130 16:00:03.964914 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b18894-dbba-45c6-ab14-330dbc6e0521-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:04 crc kubenswrapper[4740]: I0130 16:00:04.250168 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" event={"ID":"83b18894-dbba-45c6-ab14-330dbc6e0521","Type":"ContainerDied","Data":"3b5efcfec99ebc8c74cc47dbe83fb1acb9ccb56f428032eb799f77680c91ea32"} Jan 30 16:00:04 crc kubenswrapper[4740]: I0130 16:00:04.250220 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5efcfec99ebc8c74cc47dbe83fb1acb9ccb56f428032eb799f77680c91ea32" Jan 30 16:00:04 crc kubenswrapper[4740]: I0130 16:00:04.250309 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz" Jan 30 16:00:08 crc kubenswrapper[4740]: I0130 16:00:08.737428 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vsqch" Jan 30 16:00:08 crc kubenswrapper[4740]: I0130 16:00:08.738322 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vsqch" Jan 30 16:00:08 crc kubenswrapper[4740]: I0130 16:00:08.785521 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vsqch" Jan 30 16:00:09 crc kubenswrapper[4740]: I0130 16:00:09.159738 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tzjnc" Jan 30 16:00:09 crc kubenswrapper[4740]: I0130 16:00:09.160425 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tzjnc" Jan 30 16:00:09 crc kubenswrapper[4740]: I0130 16:00:09.209836 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tzjnc" Jan 30 16:00:09 crc kubenswrapper[4740]: I0130 16:00:09.324447 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tzjnc" Jan 30 16:00:09 crc kubenswrapper[4740]: I0130 16:00:09.330286 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vsqch" Jan 30 16:00:09 crc kubenswrapper[4740]: I0130 16:00:09.598299 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rf9jx"] Jan 30 16:00:10 crc kubenswrapper[4740]: I0130 16:00:10.497090 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzjnc"] Jan 30 16:00:12 crc 
kubenswrapper[4740]: I0130 16:00:12.190482 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vll9v" Jan 30 16:00:12 crc kubenswrapper[4740]: I0130 16:00:12.190985 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vll9v" Jan 30 16:00:12 crc kubenswrapper[4740]: I0130 16:00:12.235819 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vll9v" Jan 30 16:00:12 crc kubenswrapper[4740]: I0130 16:00:12.304005 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tzjnc" podUID="346cd514-9fca-47d2-9c9e-3bfe5872e936" containerName="registry-server" containerID="cri-o://0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028" gracePeriod=2 Jan 30 16:00:12 crc kubenswrapper[4740]: I0130 16:00:12.345522 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vll9v" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.287420 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzjnc" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.313605 4740 generic.go:334] "Generic (PLEG): container finished" podID="346cd514-9fca-47d2-9c9e-3bfe5872e936" containerID="0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028" exitCode=0 Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.314592 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzjnc" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.315147 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnc" event={"ID":"346cd514-9fca-47d2-9c9e-3bfe5872e936","Type":"ContainerDied","Data":"0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028"} Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.315192 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnc" event={"ID":"346cd514-9fca-47d2-9c9e-3bfe5872e936","Type":"ContainerDied","Data":"c866eeb5f7eb3b82d9a2d37b64b71ac7f8417eee3b46c6663465ccbb8784e0a9"} Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.315219 4740 scope.go:117] "RemoveContainer" containerID="0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.340164 4740 scope.go:117] "RemoveContainer" containerID="7862e637ee7f010e7e0b7036f9ba8defc6cd10acff5d190557df06e188010968" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.365509 4740 scope.go:117] "RemoveContainer" containerID="15ead62a96e65830663beeb8e79aab090023cea751dece3d72c890f65b705c87" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.384436 4740 scope.go:117] "RemoveContainer" containerID="0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028" Jan 30 16:00:13 crc kubenswrapper[4740]: E0130 16:00:13.385608 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028\": container with ID starting with 0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028 not found: ID does not exist" 
containerID="0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.385647 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028"} err="failed to get container status \"0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028\": rpc error: code = NotFound desc = could not find container \"0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028\": container with ID starting with 0a8cde34d0ddd3c262cdf95c40db8e2fe5c8a6979e87bbc1c19a5127743b2028 not found: ID does not exist" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.385675 4740 scope.go:117] "RemoveContainer" containerID="7862e637ee7f010e7e0b7036f9ba8defc6cd10acff5d190557df06e188010968" Jan 30 16:00:13 crc kubenswrapper[4740]: E0130 16:00:13.386201 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7862e637ee7f010e7e0b7036f9ba8defc6cd10acff5d190557df06e188010968\": container with ID starting with 7862e637ee7f010e7e0b7036f9ba8defc6cd10acff5d190557df06e188010968 not found: ID does not exist" containerID="7862e637ee7f010e7e0b7036f9ba8defc6cd10acff5d190557df06e188010968" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.386222 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7862e637ee7f010e7e0b7036f9ba8defc6cd10acff5d190557df06e188010968"} err="failed to get container status \"7862e637ee7f010e7e0b7036f9ba8defc6cd10acff5d190557df06e188010968\": rpc error: code = NotFound desc = could not find container \"7862e637ee7f010e7e0b7036f9ba8defc6cd10acff5d190557df06e188010968\": container with ID starting with 7862e637ee7f010e7e0b7036f9ba8defc6cd10acff5d190557df06e188010968 not found: ID does not exist" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.386237 4740 scope.go:117] "RemoveContainer" containerID="15ead62a96e65830663beeb8e79aab090023cea751dece3d72c890f65b705c87" Jan 30 16:00:13 crc kubenswrapper[4740]: E0130 16:00:13.386729 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15ead62a96e65830663beeb8e79aab090023cea751dece3d72c890f65b705c87\": container with ID starting with 15ead62a96e65830663beeb8e79aab090023cea751dece3d72c890f65b705c87 not found: ID does not exist" containerID="15ead62a96e65830663beeb8e79aab090023cea751dece3d72c890f65b705c87" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.386783 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15ead62a96e65830663beeb8e79aab090023cea751dece3d72c890f65b705c87"} err="failed to get container status \"15ead62a96e65830663beeb8e79aab090023cea751dece3d72c890f65b705c87\": rpc error: code = NotFound desc = could not find container \"15ead62a96e65830663beeb8e79aab090023cea751dece3d72c890f65b705c87\": container with ID starting with 15ead62a96e65830663beeb8e79aab090023cea751dece3d72c890f65b705c87 not found: ID does not exist" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.401960 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-utilities\") pod \"346cd514-9fca-47d2-9c9e-3bfe5872e936\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.402086 
4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-catalog-content\") pod \"346cd514-9fca-47d2-9c9e-3bfe5872e936\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.402512 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bztn2\" (UniqueName: \"kubernetes.io/projected/346cd514-9fca-47d2-9c9e-3bfe5872e936-kube-api-access-bztn2\") pod \"346cd514-9fca-47d2-9c9e-3bfe5872e936\" (UID: \"346cd514-9fca-47d2-9c9e-3bfe5872e936\") " Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.402993 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-utilities" (OuterVolumeSpecName: "utilities") pod "346cd514-9fca-47d2-9c9e-3bfe5872e936" (UID: "346cd514-9fca-47d2-9c9e-3bfe5872e936"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.403127 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.411010 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346cd514-9fca-47d2-9c9e-3bfe5872e936-kube-api-access-bztn2" (OuterVolumeSpecName: "kube-api-access-bztn2") pod "346cd514-9fca-47d2-9c9e-3bfe5872e936" (UID: "346cd514-9fca-47d2-9c9e-3bfe5872e936"). InnerVolumeSpecName "kube-api-access-bztn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.448800 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "346cd514-9fca-47d2-9c9e-3bfe5872e936" (UID: "346cd514-9fca-47d2-9c9e-3bfe5872e936"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.504391 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bztn2\" (UniqueName: \"kubernetes.io/projected/346cd514-9fca-47d2-9c9e-3bfe5872e936-kube-api-access-bztn2\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.504435 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/346cd514-9fca-47d2-9c9e-3bfe5872e936-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.641707 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzjnc"] Jan 30 16:00:13 crc kubenswrapper[4740]: I0130 16:00:13.649403 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tzjnc"] Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.344724 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346cd514-9fca-47d2-9c9e-3bfe5872e936" path="/var/lib/kubelet/pods/346cd514-9fca-47d2-9c9e-3bfe5872e936/volumes" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.608409 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.608668 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346cd514-9fca-47d2-9c9e-3bfe5872e936" containerName="registry-server" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.608681 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="346cd514-9fca-47d2-9c9e-3bfe5872e936" containerName="registry-server" Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.608690 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b18894-dbba-45c6-ab14-330dbc6e0521" containerName="collect-profiles" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.608696 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b18894-dbba-45c6-ab14-330dbc6e0521" containerName="collect-profiles" Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.608714 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346cd514-9fca-47d2-9c9e-3bfe5872e936" containerName="extract-content" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.608721 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="346cd514-9fca-47d2-9c9e-3bfe5872e936" containerName="extract-content" Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.608737 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346cd514-9fca-47d2-9c9e-3bfe5872e936" containerName="extract-utilities" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.608743 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="346cd514-9fca-47d2-9c9e-3bfe5872e936" containerName="extract-utilities" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.608849 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b18894-dbba-45c6-ab14-330dbc6e0521" containerName="collect-profiles" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.608861 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="346cd514-9fca-47d2-9c9e-3bfe5872e936" containerName="registry-server" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.609228 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.609459 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.609572 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7" gracePeriod=15 Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.609618 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258" gracePeriod=15 Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.609688 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca" gracePeriod=15 Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.609714 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a" gracePeriod=15 Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.609730 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b" gracePeriod=15 Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611014 4740 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.611253 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611269 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.611284 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611292 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.611303 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611309 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.611319 
4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611325 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.611336 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611342 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.611368 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611373 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.611386 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611392 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.611400 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611406 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611524 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611536 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611544 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611552 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611561 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611571 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.611772 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 
16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.638407 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.638467 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.638503 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.638531 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.638692 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.638784 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.638806 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.638829 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.656592 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.740793 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.740903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.740949 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.740987 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741027 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741063 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741083 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741104 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741221 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741274 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741304 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741335 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741390 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741420 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741446 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.741476 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: I0130 16:00:15.952858 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:00:15 crc kubenswrapper[4740]: E0130 16:00:15.983649 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.121:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f8d8d2815a34c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:00:15.982306124 +0000 UTC m=+264.619368723,LastTimestamp:2026-01-30 16:00:15.982306124 +0000 UTC m=+264.619368723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.338877 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541"} Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.339378 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6f8f285ad063ebed4aeedf54bb0f03fd9204bcb53692bc2fcea6c9271514d102"} Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.339928 4740 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.340631 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.342340 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.343902 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.344738 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258" exitCode=0 Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 
16:00:16.344809 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b" exitCode=0 Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.344818 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a" exitCode=0 Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.344826 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca" exitCode=2 Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.344878 4740 scope.go:117] "RemoveContainer" containerID="612f07d9b25cd2e9bab790e94111a495522f3390444cb59f08078342825f20c4" Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.349795 4740 generic.go:334] "Generic (PLEG): container finished" podID="256ad354-d723-4e72-bfcb-7ea85487109a" containerID="003fc5e57d1d97692503cab5b86690f5d585654cd98bb8c90b2d74c0171d530d" exitCode=0 Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.349841 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"256ad354-d723-4e72-bfcb-7ea85487109a","Type":"ContainerDied","Data":"003fc5e57d1d97692503cab5b86690f5d585654cd98bb8c90b2d74c0171d530d"} Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.350788 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.351542 4740 status_manager.go:851] "Failed to get status for pod" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:16 crc kubenswrapper[4740]: I0130 16:00:16.352340 4740 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:16 crc kubenswrapper[4740]: E0130 16:00:16.750053 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.121:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f8d8d2815a34c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:00:15.982306124 +0000 UTC m=+264.619368723,LastTimestamp:2026-01-30 16:00:15.982306124 +0000 UTC m=+264.619368723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.359934 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.615346 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.617407 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.617898 4740 status_manager.go:851] "Failed to get status for pod" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.674903 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-var-lock\") pod \"256ad354-d723-4e72-bfcb-7ea85487109a\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.675037 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/256ad354-d723-4e72-bfcb-7ea85487109a-kube-api-access\") pod \"256ad354-d723-4e72-bfcb-7ea85487109a\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.675061 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-var-lock" (OuterVolumeSpecName: "var-lock") pod "256ad354-d723-4e72-bfcb-7ea85487109a" (UID: "256ad354-d723-4e72-bfcb-7ea85487109a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.675303 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-kubelet-dir\") pod \"256ad354-d723-4e72-bfcb-7ea85487109a\" (UID: \"256ad354-d723-4e72-bfcb-7ea85487109a\") " Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.675495 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "256ad354-d723-4e72-bfcb-7ea85487109a" (UID: "256ad354-d723-4e72-bfcb-7ea85487109a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.675896 4740 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.675936 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/256ad354-d723-4e72-bfcb-7ea85487109a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.690192 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256ad354-d723-4e72-bfcb-7ea85487109a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "256ad354-d723-4e72-bfcb-7ea85487109a" (UID: "256ad354-d723-4e72-bfcb-7ea85487109a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.777791 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/256ad354-d723-4e72-bfcb-7ea85487109a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:17 crc kubenswrapper[4740]: E0130 16:00:17.876056 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:17 crc kubenswrapper[4740]: E0130 16:00:17.876456 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:17 crc kubenswrapper[4740]: E0130 16:00:17.877147 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:17 crc kubenswrapper[4740]: E0130 16:00:17.877706 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:17 crc kubenswrapper[4740]: E0130 16:00:17.878009 4740 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:17 crc kubenswrapper[4740]: I0130 16:00:17.878046 4740 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 16:00:17 crc kubenswrapper[4740]: E0130 16:00:17.878258 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="200ms" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.007935 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:00:18 crc 
kubenswrapper[4740]: I0130 16:00:18.008739 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.009456 4740 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.010047 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.011057 4740 status_manager.go:851] "Failed to get status for pod" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:18 crc kubenswrapper[4740]: E0130 16:00:18.079461 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="400ms" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.080827 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.080949 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.081471 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.081537 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.082019 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.082076 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.082977 4740 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.083218 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.083436 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.371212 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.372250 4740 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7" exitCode=0 Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.372387 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.372397 4740 scope.go:117] "RemoveContainer" containerID="155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.374939 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"256ad354-d723-4e72-bfcb-7ea85487109a","Type":"ContainerDied","Data":"bd35219e93840b338ab71997b40489cab27eff89fc27609d39018188cce61e19"} Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.374976 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd35219e93840b338ab71997b40489cab27eff89fc27609d39018188cce61e19" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.375034 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.393188 4740 status_manager.go:851] "Failed to get status for pod" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.394692 4740 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.395377 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.395968 4740 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.396641 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.396993 4740 status_manager.go:851] "Failed to get status for pod" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.403000 4740 scope.go:117] "RemoveContainer" containerID="9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.417651 4740 scope.go:117] "RemoveContainer" containerID="163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.431609 4740 scope.go:117] "RemoveContainer" containerID="b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.453283 4740 scope.go:117] "RemoveContainer" containerID="bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.479204 4740 scope.go:117] "RemoveContainer" containerID="d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a" Jan 30 16:00:18 crc kubenswrapper[4740]: E0130 16:00:18.480249 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="800ms" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.507125 4740 scope.go:117] "RemoveContainer" containerID="155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258" Jan 30 16:00:18 crc kubenswrapper[4740]: E0130 16:00:18.507636 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\": container with ID starting with 155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258 not found: ID does not exist" containerID="155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.507680 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258"} err="failed to get container status \"155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\": rpc error: code = NotFound desc = could not find container \"155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258\": container with ID starting with 155b49a1611f0f40ec53c32f93550aa8fb80d1e9001908e7dfd2e0868bef8258 not found: ID does not exist" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.507714 4740 scope.go:117] "RemoveContainer" containerID="9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b" Jan 30 16:00:18 crc kubenswrapper[4740]: E0130 16:00:18.508078 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\": container with ID starting with 9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b not found: ID does not exist" containerID="9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.508137 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b"} err="failed to get container status \"9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\": rpc error: code = NotFound desc = could not find container \"9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b\": container with ID starting with 9a847e8b051617b1fd738d918d8c4c28fbf7883aa84274e818b2a5c87d3a332b not found: ID does not exist" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.508179 4740 scope.go:117] "RemoveContainer" containerID="163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a" Jan 30 16:00:18 crc kubenswrapper[4740]: E0130 16:00:18.508500 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\": container with ID starting with 163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a not found: ID does not exist" containerID="163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.508524 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a"} err="failed to 
get container status \"163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\": rpc error: code = NotFound desc = could not find container \"163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a\": container with ID starting with 163eaa9623ac1fb99ff0630153a700f58468d647861297b8e73b536344cc051a not found: ID does not exist" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.508541 4740 scope.go:117] "RemoveContainer" containerID="b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca" Jan 30 16:00:18 crc kubenswrapper[4740]: E0130 16:00:18.509004 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\": container with ID starting with b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca not found: ID does not exist" containerID="b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.509061 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca"} err="failed to get container status \"b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\": rpc error: code = NotFound desc = could not find container \"b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca\": container with ID starting with b3a3b9645bd42008bac78684ab6770be6210bcb93cf1404f63ca0cc9f1bfbbca not found: ID does not exist" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.509098 4740 scope.go:117] "RemoveContainer" containerID="bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7" Jan 30 16:00:18 crc kubenswrapper[4740]: E0130 16:00:18.509482 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\": container with ID starting with bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7 not found: ID does not exist" containerID="bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.509515 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7"} err="failed to get container status \"bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\": rpc error: code = NotFound desc = could not find container \"bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7\": container with ID starting with bae9ee10702bb7f323b2db002a777518faaf3b55b32ddf8185b389826b491ae7 not found: ID does not exist" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.509532 4740 scope.go:117] "RemoveContainer" containerID="d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a" Jan 30 16:00:18 crc kubenswrapper[4740]: E0130 16:00:18.509872 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a\": container with ID starting with d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a not found: ID does not exist" containerID="d70fca2c5e1df8c6b9833916e77c9f95e8c9d27ad4999007373f199551c0c43a" Jan 30 16:00:18 crc kubenswrapper[4740]: I0130 16:00:18.509900 4740 
Jan 30 16:00:19 crc kubenswrapper[4740]: E0130 16:00:19.281315 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="1.6s"
Jan 30 16:00:19 crc kubenswrapper[4740]: I0130 16:00:19.342236 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 30 16:00:20 crc kubenswrapper[4740]: E0130 16:00:20.883306 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="3.2s"
Jan 30 16:00:23 crc kubenswrapper[4740]: I0130 16:00:23.337781 4740 status_manager.go:851] "Failed to get status for pod" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:23 crc kubenswrapper[4740]: I0130 16:00:23.338620 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:24 crc kubenswrapper[4740]: E0130 16:00:24.086067 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="6.4s"
Jan 30 16:00:26 crc kubenswrapper[4740]: E0130 16:00:26.354858 4740 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.121:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" volumeName="registry-storage"
Jan 30 16:00:26 crc kubenswrapper[4740]: E0130 16:00:26.751787 4740 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.121:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f8d8d2815a34c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:00:15.982306124 +0000 UTC m=+264.619368723,LastTimestamp:2026-01-30 16:00:15.982306124 +0000 UTC m=+264.619368723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 16:00:30 crc kubenswrapper[4740]: I0130 16:00:30.465925 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 30 16:00:30 crc kubenswrapper[4740]: I0130 16:00:30.467630 4740 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52" exitCode=1
Jan 30 16:00:30 crc kubenswrapper[4740]: I0130 16:00:30.467730 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52"}
Jan 30 16:00:30 crc kubenswrapper[4740]: I0130 16:00:30.468951 4740 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:30 crc kubenswrapper[4740]: I0130 16:00:30.469026 4740 scope.go:117] "RemoveContainer" containerID="1ece4b7f88f3291b64aa1f742e55542c8b03b86da553b41d953284fb5ee13d52"
Jan 30 16:00:30 crc kubenswrapper[4740]: I0130 16:00:30.469734 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:30 crc kubenswrapper[4740]: I0130 16:00:30.470409 4740 status_manager.go:851] "Failed to get status for pod" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:30 crc kubenswrapper[4740]: E0130 16:00:30.487675 4740 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.121:6443: connect: connection refused" interval="7s"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.335472 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.337021 4740 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.337787 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.338767 4740 status_manager.go:851] "Failed to get status for pod" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.355448 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.355497 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5"
Jan 30 16:00:31 crc kubenswrapper[4740]: E0130 16:00:31.356184 4740 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.356952 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:00:31 crc kubenswrapper[4740]: W0130 16:00:31.383753 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-44348621a176fd456e3b7a6b3e5a3616607fd05daf5da0298aa9cfd68e1252a8 WatchSource:0}: Error finding container 44348621a176fd456e3b7a6b3e5a3616607fd05daf5da0298aa9cfd68e1252a8: Status 404 returned error can't find the container with id 44348621a176fd456e3b7a6b3e5a3616607fd05daf5da0298aa9cfd68e1252a8
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.478304 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"44348621a176fd456e3b7a6b3e5a3616607fd05daf5da0298aa9cfd68e1252a8"}
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.485095 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.485214 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3c037e45a6b941ffac34e06070f4175503876335b236d97c027f4b4a18206960"}
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.486520 4740 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.487036 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.487432 4740 status_manager.go:851] "Failed to get status for pod" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.565703 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.570682 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.571528 4740 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.572229 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:31 crc kubenswrapper[4740]: I0130 16:00:31.572548 4740 status_manager.go:851] "Failed to get status for pod" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:32 crc kubenswrapper[4740]: I0130 16:00:32.497014 4740 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e4c9da7ba1827f3ecb8952f2c283e8acb6a129b596b66e7d6a83cc28431ae7d0" exitCode=0
Jan 30 16:00:32 crc kubenswrapper[4740]: I0130 16:00:32.497111 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e4c9da7ba1827f3ecb8952f2c283e8acb6a129b596b66e7d6a83cc28431ae7d0"}
Jan 30 16:00:32 crc kubenswrapper[4740]: I0130 16:00:32.497706 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5"
Jan 30 16:00:32 crc kubenswrapper[4740]: I0130 16:00:32.497746 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5"
Jan 30 16:00:32 crc kubenswrapper[4740]: I0130 16:00:32.498666 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:00:32 crc kubenswrapper[4740]: I0130 16:00:32.498736 4740 status_manager.go:851] "Failed to get status for pod" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:32 crc kubenswrapper[4740]: E0130 16:00:32.498849 4740 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.121:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:00:32 crc kubenswrapper[4740]: I0130 16:00:32.499217 4740 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:32 crc kubenswrapper[4740]: I0130 16:00:32.499715 4740 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.121:6443: connect: connection refused"
Jan 30 16:00:33 crc kubenswrapper[4740]: I0130 16:00:33.507947 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"487e491d239ef18ef0d300e39a02e74e2c75d50e49bfae450dfcf3e5935ba7e0"}
Jan 30 16:00:33 crc kubenswrapper[4740]: I0130 16:00:33.508713 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0c6c58c9ed457f2b51e8a7440946a859619fdcb2b2a6a774d334b8449fec646e"}
Jan 30 16:00:33 crc kubenswrapper[4740]: I0130 16:00:33.508725 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a946fa5144d12480343bbbd1b1f97e2b69d3cc954debe09b2ff374e04e9db140"}
Jan 30 16:00:34 crc kubenswrapper[4740]: I0130 16:00:34.516076 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c3a8e90c4f0eda452734ca33593535afa1c49b0c9456ac62e38833c5389fc3aa"}
Jan 30 16:00:34 crc kubenswrapper[4740]: I0130 16:00:34.516124 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1120556fa7f60b3e994f5022547a922a0f69d4f8f31acc090ae27e6b263f76a9"}
Jan 30 16:00:34 crc kubenswrapper[4740]: I0130 16:00:34.516293 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:00:34 crc kubenswrapper[4740]: I0130 16:00:34.516505 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5"
Jan 30 16:00:34 crc kubenswrapper[4740]: I0130 16:00:34.516539 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5"
Jan 30 16:00:34 crc kubenswrapper[4740]: I0130 16:00:34.627001 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" podUID="64c5a0e0-9121-416a-b48c-219349cc9ba3" containerName="oauth-openshift" containerID="cri-o://343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71" gracePeriod=15
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.048085 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx"
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186458 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-cliconfig\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186519 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-login\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186552 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-serving-cert\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186577 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-provider-selection\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186600 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-router-certs\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186649 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-policies\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186686 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-service-ca\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186706 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-session\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186753 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-idp-0-file-data\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186776 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-trusted-ca-bundle\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186806 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-error\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186847 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r7dn\" (UniqueName: \"kubernetes.io/projected/64c5a0e0-9121-416a-b48c-219349cc9ba3-kube-api-access-7r7dn\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186872 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-ocp-branding-template\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.186903 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-dir\") pod \"64c5a0e0-9121-416a-b48c-219349cc9ba3\" (UID: \"64c5a0e0-9121-416a-b48c-219349cc9ba3\") "
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.187216 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.188097 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.188539 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.188560 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.188833 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.228808 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c5a0e0-9121-416a-b48c-219349cc9ba3-kube-api-access-7r7dn" (OuterVolumeSpecName: "kube-api-access-7r7dn") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "kube-api-access-7r7dn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.237675 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.244902 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.245853 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.247714 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.248037 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.248460 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.248689 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.248848 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "64c5a0e0-9121-416a-b48c-219349cc9ba3" (UID: "64c5a0e0-9121-416a-b48c-219349cc9ba3"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288381 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288423 4740 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288439 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288453 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288465 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288476 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288489 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288500 4740 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288510 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288522 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288532 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288542 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288552 4740 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/64c5a0e0-9121-416a-b48c-219349cc9ba3-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.288564 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r7dn\" (UniqueName: \"kubernetes.io/projected/64c5a0e0-9121-416a-b48c-219349cc9ba3-kube-api-access-7r7dn\") on node \"crc\" DevicePath \"\""
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.524184 4740 generic.go:334] "Generic (PLEG): container finished" podID="64c5a0e0-9121-416a-b48c-219349cc9ba3" containerID="343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71" exitCode=0
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.524247 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" event={"ID":"64c5a0e0-9121-416a-b48c-219349cc9ba3","Type":"ContainerDied","Data":"343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71"}
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.524259 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx"
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.524297 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rf9jx" event={"ID":"64c5a0e0-9121-416a-b48c-219349cc9ba3","Type":"ContainerDied","Data":"ae4d71e3522ae844e4dd59663ad89c2844759a466a2170626e75342d56406d36"}
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.524322 4740 scope.go:117] "RemoveContainer" containerID="343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71"
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.544541 4740 scope.go:117] "RemoveContainer" containerID="343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71"
Jan 30 16:00:35 crc kubenswrapper[4740]: E0130 16:00:35.544991 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71\": container with ID starting with 343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71 not found: ID does not exist" containerID="343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71"
Jan 30 16:00:35 crc kubenswrapper[4740]: I0130 16:00:35.545054 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71"} err="failed to get container status \"343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71\": rpc error: code = NotFound desc = could not find container \"343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71\": container with ID starting with 343e73cfded8d1ee334484badcac73a3213132338efa32d096eab1ece4e54a71 not found: ID does not exist"
Jan 30 16:00:36 crc kubenswrapper[4740]: I0130 16:00:36.357075 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:00:36 crc kubenswrapper[4740]: I0130 16:00:36.357149 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:36 crc kubenswrapper[4740]: I0130 16:00:36.364343 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:39 crc kubenswrapper[4740]: I0130 16:00:39.527628 4740 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:39 crc kubenswrapper[4740]: I0130 16:00:39.553740 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5" Jan 30 16:00:39 crc kubenswrapper[4740]: I0130 16:00:39.553785 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5" Jan 30 16:00:39 crc kubenswrapper[4740]: I0130 16:00:39.558936 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:39 crc kubenswrapper[4740]: I0130 16:00:39.562065 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b9f3b46f-5bcc-4937-8e5e-8a703ec1b514" Jan 30 16:00:39 crc kubenswrapper[4740]: E0130 16:00:39.587262 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 30 16:00:39 crc kubenswrapper[4740]: E0130 16:00:39.993210 4740 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 30 16:00:40 crc kubenswrapper[4740]: I0130 16:00:40.564049 4740 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5" Jan 30 16:00:40 crc kubenswrapper[4740]: I0130 16:00:40.564582 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9309f8c5-b0ed-43d8-80b2-ddd9cfd77bf5" Jan 30 16:00:43 crc kubenswrapper[4740]: I0130 16:00:43.356667 4740 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b9f3b46f-5bcc-4937-8e5e-8a703ec1b514" Jan 30 16:00:47 crc kubenswrapper[4740]: I0130 16:00:47.900016 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:00:48 crc kubenswrapper[4740]: I0130 16:00:48.722339 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 16:00:48 crc kubenswrapper[4740]: I0130 16:00:48.985340 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 16:00:49 crc kubenswrapper[4740]: I0130 16:00:49.346260 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 16:00:49 crc kubenswrapper[4740]: I0130 16:00:49.563687 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 16:00:49 crc 
kubenswrapper[4740]: I0130 16:00:49.784625 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 16:00:50 crc kubenswrapper[4740]: I0130 16:00:50.364660 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 16:00:50 crc kubenswrapper[4740]: I0130 16:00:50.599137 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 16:00:50 crc kubenswrapper[4740]: I0130 16:00:50.888648 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 16:00:50 crc kubenswrapper[4740]: I0130 16:00:50.942251 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 16:00:51 crc kubenswrapper[4740]: I0130 16:00:51.053311 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 16:00:51 crc kubenswrapper[4740]: I0130 16:00:51.144220 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 16:00:51 crc kubenswrapper[4740]: I0130 16:00:51.241446 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 16:00:51 crc kubenswrapper[4740]: I0130 16:00:51.334466 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 16:00:51 crc kubenswrapper[4740]: I0130 16:00:51.352742 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 16:00:51 crc kubenswrapper[4740]: I0130 16:00:51.666326 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 16:00:51 crc kubenswrapper[4740]: I0130 16:00:51.755532 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 16:00:51 crc kubenswrapper[4740]: I0130 16:00:51.755768 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 16:00:51 crc kubenswrapper[4740]: I0130 16:00:51.809184 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 16:00:51 crc kubenswrapper[4740]: I0130 16:00:51.992135 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 16:00:52 crc kubenswrapper[4740]: I0130 16:00:52.341862 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 16:00:52 crc kubenswrapper[4740]: I0130 16:00:52.376818 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 16:00:52 crc kubenswrapper[4740]: I0130 16:00:52.440496 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 16:00:52 crc kubenswrapper[4740]: I0130 16:00:52.468120 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 16:00:52 crc kubenswrapper[4740]: I0130 16:00:52.470226 4740 
reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 16:00:52 crc kubenswrapper[4740]: I0130 16:00:52.623172 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 16:00:52 crc kubenswrapper[4740]: I0130 16:00:52.673275 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 16:00:52 crc kubenswrapper[4740]: I0130 16:00:52.678336 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 16:00:52 crc kubenswrapper[4740]: I0130 16:00:52.884165 4740 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 16:00:52 crc kubenswrapper[4740]: I0130 16:00:52.903323 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 16:00:52 crc kubenswrapper[4740]: I0130 16:00:52.915777 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.011938 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.127031 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.225952 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.232968 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.247985 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.252967 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.303785 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.306510 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.334589 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.358919 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.395169 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.626274 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.627192 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.918854 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.979760 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 16:00:53 crc kubenswrapper[4740]: I0130 16:00:53.997798 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.061855 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.067108 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.067335 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.079057 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.091785 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.103296 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.121017 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.133665 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.139106 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.218621 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.294303 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.309244 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.341229 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.531405 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.601404 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.679149 4740 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.688621 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.720627 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.721378 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.736492 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.785581 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.788901 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.814810 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.834559 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.902572 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.934914 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 16:00:54 crc kubenswrapper[4740]: I0130 16:00:54.966643 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.003164 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.078957 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.111195 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.291719 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.340388 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.346706 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.384238 4740 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.387968 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.387942824 podStartE2EDuration="40.387942824s" podCreationTimestamp="2026-01-30 16:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:00:39.32448837 +0000 UTC m=+287.961550979" watchObservedRunningTime="2026-01-30 16:00:55.387942824 +0000 UTC m=+304.025005433" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.390733 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-rf9jx"] Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.390808 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.392115 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.395750 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.409632 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.412336 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.412315201 podStartE2EDuration="16.412315201s" podCreationTimestamp="2026-01-30 16:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:00:55.411708026 +0000 UTC m=+304.048770635" watchObservedRunningTime="2026-01-30 16:00:55.412315201 +0000 UTC m=+304.049377810" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.496067 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.505076 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.614310 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.646880 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.779822 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.909889 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.941178 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 16:00:55 crc kubenswrapper[4740]: I0130 16:00:55.957155 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 
16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.012245 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.031810 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.034995 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.171844 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.256271 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.389005 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5589bfb64c-jztbw"] Jan 30 16:00:56 crc kubenswrapper[4740]: E0130 16:00:56.389246 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c5a0e0-9121-416a-b48c-219349cc9ba3" containerName="oauth-openshift" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.389261 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c5a0e0-9121-416a-b48c-219349cc9ba3" containerName="oauth-openshift" Jan 30 16:00:56 crc kubenswrapper[4740]: E0130 16:00:56.389279 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" containerName="installer" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.389288 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" containerName="installer" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.389424 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="256ad354-d723-4e72-bfcb-7ea85487109a" containerName="installer" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.389445 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c5a0e0-9121-416a-b48c-219349cc9ba3" containerName="oauth-openshift" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.389887 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.392600 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.394955 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.395237 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.396630 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.397279 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.397997 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.398240 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.398473 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.398561 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.399124 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.399149 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.406575 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.410252 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.415575 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.419226 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.435488 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.446310 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.523026 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.523108 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shsq8\" (UniqueName: \"kubernetes.io/projected/330e8781-2983-4930-8960-199e6829a8c7-kube-api-access-shsq8\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.523148 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-session\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.523175 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.523489 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.523604 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-router-certs\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.523846 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-template-error\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.523955 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 
16:00:56.523993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-service-ca\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.524027 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-template-login\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.524084 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.524165 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/330e8781-2983-4930-8960-199e6829a8c7-audit-dir\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.524222 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.524254 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-audit-policies\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.573589 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.578171 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.579388 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.585700 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.607634 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625141 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/330e8781-2983-4930-8960-199e6829a8c7-audit-dir\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-audit-policies\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625262 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625296 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shsq8\" (UniqueName: \"kubernetes.io/projected/330e8781-2983-4930-8960-199e6829a8c7-kube-api-access-shsq8\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-session\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625397 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " 
pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625472 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-router-certs\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625582 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-template-error\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625621 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625644 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-service-ca\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625670 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-template-login\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.625694 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.626738 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/330e8781-2983-4930-8960-199e6829a8c7-audit-dir\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.627317 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " 
pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.629400 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-service-ca\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.629424 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-audit-policies\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.629548 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.635619 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.636739 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-template-error\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.636933 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-template-login\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.637133 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.637243 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-router-certs\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: 
I0130 16:00:56.637501 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-session\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.639169 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.648607 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/330e8781-2983-4930-8960-199e6829a8c7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.661777 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shsq8\" (UniqueName: \"kubernetes.io/projected/330e8781-2983-4930-8960-199e6829a8c7-kube-api-access-shsq8\") pod \"oauth-openshift-5589bfb64c-jztbw\" (UID: \"330e8781-2983-4930-8960-199e6829a8c7\") " pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.695903 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.710996 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.721274 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.782645 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.799614 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.873691 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.935412 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.957089 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 16:00:56 crc kubenswrapper[4740]: I0130 16:00:56.976809 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.028824 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.039731 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.045666 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.103551 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.219442 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.349132 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.350188 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c5a0e0-9121-416a-b48c-219349cc9ba3" path="/var/lib/kubelet/pods/64c5a0e0-9121-416a-b48c-219349cc9ba3/volumes" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.396455 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.401052 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.503659 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.542671 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.661371 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" 
Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.673784 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 16:00:57 crc kubenswrapper[4740]: I0130 16:00:57.950279 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.067336 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.074514 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.137907 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.175694 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.221598 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.296317 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.317165 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.320123 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.455534 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.463015 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.470528 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.556184 4740 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.771025 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.772447 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.790164 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.909185 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 16:00:58 crc kubenswrapper[4740]: I0130 16:00:58.919122 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 
16:00:59.001469 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.039552 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5589bfb64c-jztbw"] Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.040997 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.047787 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.209004 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.321909 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.349397 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.386487 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.394237 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.442105 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.471309 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.613831 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.656878 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 16:00:59 crc kubenswrapper[4740]: E0130 16:00:59.659493 4740 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 16:00:59 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96): error adding pod openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96" Netns:"/var/run/netns/6034da21-42e0-4c47-9098-c18a7b66a22a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: 
[openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod "oauth-openshift-5589bfb64c-jztbw" not found Jan 30 16:00:59 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:00:59 crc kubenswrapper[4740]: > Jan 30 16:00:59 crc kubenswrapper[4740]: E0130 16:00:59.659576 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 16:00:59 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96): error adding pod openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96" Netns:"/var/run/netns/6034da21-42e0-4c47-9098-c18a7b66a22a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: [openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod "oauth-openshift-5589bfb64c-jztbw" not found Jan 30 16:00:59 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:00:59 crc kubenswrapper[4740]: > pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:59 crc kubenswrapper[4740]: E0130 16:00:59.659602 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 16:00:59 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96): error adding pod openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96" 
Netns:"/var/run/netns/6034da21-42e0-4c47-9098-c18a7b66a22a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: [openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod "oauth-openshift-5589bfb64c-jztbw" not found Jan 30 16:00:59 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:00:59 crc kubenswrapper[4740]: > pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:59 crc kubenswrapper[4740]: E0130 16:00:59.659666 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5589bfb64c-jztbw_openshift-authentication(330e8781-2983-4930-8960-199e6829a8c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5589bfb64c-jztbw_openshift-authentication(330e8781-2983-4930-8960-199e6829a8c7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96): error adding pod openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96\\\" Netns:\\\"/var/run/netns/6034da21-42e0-4c47-9098-c18a7b66a22a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=4a528cbfec2ec70192d078b45070e58c09f0ab7e9ef00792f321eddc1fcedb96;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: [openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod \\\"oauth-openshift-5589bfb64c-jztbw\\\" not found\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" podUID="330e8781-2983-4930-8960-199e6829a8c7" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.695577 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.696178 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.851870 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.954821 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 16:00:59 crc kubenswrapper[4740]: I0130 16:00:59.974052 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.036983 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.038822 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.144058 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.149341 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.225802 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.260585 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.295162 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.359574 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.494085 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.508642 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.636742 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.639962 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.647286 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.649796 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.740963 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.751169 4740 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.757381 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.808183 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 16:01:00 crc kubenswrapper[4740]: I0130 16:01:00.848149 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.012157 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.080518 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.105258 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.176812 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.182884 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.205152 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.248472 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.339404 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.343492 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.563826 4740 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.603509 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.748336 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.789549 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.849903 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.851179 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.868631 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.877651 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 16:01:01 crc kubenswrapper[4740]: I0130 16:01:01.989841 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.016216 4740 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.016613 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541" gracePeriod=5 Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.036168 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.049601 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.104771 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.282457 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.290389 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.297663 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.314520 4740 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.361949 4740 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.638952 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.774714 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 16:01:02 crc kubenswrapper[4740]: E0130 16:01:02.807331 4740 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 16:01:02 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b): error adding pod openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b" Netns:"/var/run/netns/84397b35-0c08-4a47-b1c4-9e8ccadb16f2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: [openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod "oauth-openshift-5589bfb64c-jztbw" not found Jan 30 16:01:02 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:02 crc kubenswrapper[4740]: > Jan 30 16:01:02 crc kubenswrapper[4740]: E0130 16:01:02.807435 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 16:01:02 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b): error adding pod openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b" Netns:"/var/run/netns/84397b35-0c08-4a47-b1c4-9e8ccadb16f2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: 
[openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod "oauth-openshift-5589bfb64c-jztbw" not found Jan 30 16:01:02 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:02 crc kubenswrapper[4740]: > pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:01:02 crc kubenswrapper[4740]: E0130 16:01:02.807458 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 16:01:02 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b): error adding pod openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b" Netns:"/var/run/netns/84397b35-0c08-4a47-b1c4-9e8ccadb16f2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: [openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod "oauth-openshift-5589bfb64c-jztbw" not found Jan 30 16:01:02 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:02 crc kubenswrapper[4740]: > pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:01:02 crc kubenswrapper[4740]: E0130 16:01:02.807521 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5589bfb64c-jztbw_openshift-authentication(330e8781-2983-4930-8960-199e6829a8c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5589bfb64c-jztbw_openshift-authentication(330e8781-2983-4930-8960-199e6829a8c7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b): error adding pod 
openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b\\\" Netns:\\\"/var/run/netns/84397b35-0c08-4a47-b1c4-9e8ccadb16f2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=04159bb9658e0bbfa639267aaf8701d7dbf16e5f26f79aca0e81972deca83f0b;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: [openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod \\\"oauth-openshift-5589bfb64c-jztbw\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" podUID="330e8781-2983-4930-8960-199e6829a8c7" Jan 30 16:01:02 crc kubenswrapper[4740]: I0130 16:01:02.886297 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 16:01:03 crc kubenswrapper[4740]: I0130 16:01:03.191827 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 16:01:03 crc kubenswrapper[4740]: I0130 16:01:03.216249 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 16:01:03 crc kubenswrapper[4740]: I0130 16:01:03.341144 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 16:01:03 crc kubenswrapper[4740]: I0130 16:01:03.457416 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 16:01:03 crc kubenswrapper[4740]: I0130 16:01:03.542652 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 16:01:03 crc kubenswrapper[4740]: I0130 16:01:03.633875 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 16:01:03 crc kubenswrapper[4740]: I0130 16:01:03.646646 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 16:01:03 crc kubenswrapper[4740]: I0130 16:01:03.905421 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 16:01:03 crc kubenswrapper[4740]: I0130 16:01:03.940959 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 16:01:04 crc kubenswrapper[4740]: I0130 16:01:04.081887 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 16:01:04 crc kubenswrapper[4740]: I0130 16:01:04.128995 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 16:01:04 crc kubenswrapper[4740]: I0130 16:01:04.388003 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 16:01:04 crc kubenswrapper[4740]: I0130 16:01:04.467513 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 16:01:04 crc kubenswrapper[4740]: I0130 16:01:04.605473 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 16:01:04 crc kubenswrapper[4740]: I0130 16:01:04.740671 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 16:01:04 crc kubenswrapper[4740]: I0130 16:01:04.934883 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 16:01:05 crc kubenswrapper[4740]: I0130 16:01:05.352603 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 16:01:05 crc kubenswrapper[4740]: I0130 16:01:05.398865 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 16:01:05 crc kubenswrapper[4740]: I0130 16:01:05.499106 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.612443 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.612942 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.757591 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.758009 4740 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541" exitCode=137 Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.758183 4740 scope.go:117] "RemoveContainer" containerID="b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.758961 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.784715 4740 scope.go:117] "RemoveContainer" containerID="b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541" Jan 30 16:01:07 crc kubenswrapper[4740]: E0130 16:01:07.785449 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541\": container with ID starting with b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541 not found: ID does not exist" containerID="b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.785512 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541"} err="failed to get container status \"b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541\": rpc error: code = NotFound desc = could not find container \"b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541\": container with ID starting with b661c50646860a602420d8293e3f59ee92003ea50ee3f6f667ec946151e89541 not found: ID does not exist" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.796261 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.796340 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.796447 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.796517 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.796573 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.796580 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.796629 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.796715 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.796824 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.797231 4740 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.797253 4740 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.797265 4740 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.797279 4740 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.815249 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:01:07 crc kubenswrapper[4740]: I0130 16:01:07.898636 4740 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:09 crc kubenswrapper[4740]: I0130 16:01:09.347758 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 16:01:09 crc kubenswrapper[4740]: I0130 16:01:09.348253 4740 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 30 16:01:09 crc kubenswrapper[4740]: I0130 16:01:09.363513 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:01:09 crc kubenswrapper[4740]: I0130 16:01:09.363554 4740 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="48e00694-a5d6-43eb-a987-ee189d283dc3" Jan 30 16:01:09 crc kubenswrapper[4740]: I0130 16:01:09.370956 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:01:09 crc kubenswrapper[4740]: I0130 16:01:09.371019 4740 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="48e00694-a5d6-43eb-a987-ee189d283dc3" Jan 30 16:01:13 crc kubenswrapper[4740]: I0130 16:01:13.335655 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:01:13 crc kubenswrapper[4740]: I0130 16:01:13.342180 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:01:16 crc kubenswrapper[4740]: E0130 16:01:16.266579 4740 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 16:01:16 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008): error adding pod openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008" Netns:"/var/run/netns/5cf29349-e721-483f-a6ff-f7c51981fe36" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: [openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod "oauth-openshift-5589bfb64c-jztbw" not found Jan 30 16:01:16 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:16 crc kubenswrapper[4740]: > Jan 30 16:01:16 crc kubenswrapper[4740]: E0130 16:01:16.267035 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 16:01:16 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008): error adding pod openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008" Netns:"/var/run/netns/5cf29349-e721-483f-a6ff-f7c51981fe36" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: [openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod "oauth-openshift-5589bfb64c-jztbw" not found Jan 30 16:01:16 crc 
kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:16 crc kubenswrapper[4740]: > pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:01:16 crc kubenswrapper[4740]: E0130 16:01:16.267055 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 16:01:16 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008): error adding pod openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008" Netns:"/var/run/netns/5cf29349-e721-483f-a6ff-f7c51981fe36" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: [openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod "oauth-openshift-5589bfb64c-jztbw" not found Jan 30 16:01:16 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:16 crc kubenswrapper[4740]: > pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:01:16 crc kubenswrapper[4740]: E0130 16:01:16.267122 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5589bfb64c-jztbw_openshift-authentication(330e8781-2983-4930-8960-199e6829a8c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5589bfb64c-jztbw_openshift-authentication(330e8781-2983-4930-8960-199e6829a8c7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5589bfb64c-jztbw_openshift-authentication_330e8781-2983-4930-8960-199e6829a8c7_0(0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008): error adding pod openshift-authentication_oauth-openshift-5589bfb64c-jztbw to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008\\\" 
Netns:\\\"/var/run/netns/5cf29349-e721-483f-a6ff-f7c51981fe36\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5589bfb64c-jztbw;K8S_POD_INFRA_CONTAINER_ID=0feb2a61a42f54c064333fd2bd420aa12188f3d9e9b4c74399ffbc502d38e008;K8S_POD_UID=330e8781-2983-4930-8960-199e6829a8c7\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5589bfb64c-jztbw] networking: Multus: [openshift-authentication/oauth-openshift-5589bfb64c-jztbw/330e8781-2983-4930-8960-199e6829a8c7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5589bfb64c-jztbw in out of cluster comm: pod \\\"oauth-openshift-5589bfb64c-jztbw\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" podUID="330e8781-2983-4930-8960-199e6829a8c7" Jan 30 16:01:16 crc kubenswrapper[4740]: I0130 16:01:16.331638 4740 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 16:01:18 crc kubenswrapper[4740]: I0130 16:01:18.525076 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 16:01:18 crc kubenswrapper[4740]: I0130 16:01:18.680461 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn"] Jan 30 16:01:18 crc kubenswrapper[4740]: I0130 16:01:18.681236 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" podUID="7cd9a390-47d5-45d2-af3e-35b9d551249b" containerName="route-controller-manager" containerID="cri-o://27d8d81708bf5f810429a8cf02d767efb87b0c360cd270fce761bd5b596e94cb" gracePeriod=30 Jan 30 16:01:18 crc kubenswrapper[4740]: I0130 16:01:18.691857 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-98cc8f9d-5h7qg"] Jan 30 16:01:18 crc kubenswrapper[4740]: I0130 16:01:18.692264 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" podUID="e6762529-2483-438a-be25-61b823fc41f1" containerName="controller-manager" containerID="cri-o://5f9033091bf8de3885892d4431e68269499fc1b1d63c740c7c159ed78c13e85e" gracePeriod=30 Jan 30 16:01:18 crc kubenswrapper[4740]: I0130 16:01:18.849567 4740 generic.go:334] "Generic (PLEG): container finished" podID="7cd9a390-47d5-45d2-af3e-35b9d551249b" containerID="27d8d81708bf5f810429a8cf02d767efb87b0c360cd270fce761bd5b596e94cb" exitCode=0 Jan 30 16:01:18 crc kubenswrapper[4740]: I0130 16:01:18.849640 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" 
event={"ID":"7cd9a390-47d5-45d2-af3e-35b9d551249b","Type":"ContainerDied","Data":"27d8d81708bf5f810429a8cf02d767efb87b0c360cd270fce761bd5b596e94cb"} Jan 30 16:01:18 crc kubenswrapper[4740]: I0130 16:01:18.851089 4740 generic.go:334] "Generic (PLEG): container finished" podID="e6762529-2483-438a-be25-61b823fc41f1" containerID="5f9033091bf8de3885892d4431e68269499fc1b1d63c740c7c159ed78c13e85e" exitCode=0 Jan 30 16:01:18 crc kubenswrapper[4740]: I0130 16:01:18.851117 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" event={"ID":"e6762529-2483-438a-be25-61b823fc41f1","Type":"ContainerDied","Data":"5f9033091bf8de3885892d4431e68269499fc1b1d63c740c7c159ed78c13e85e"} Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.103123 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.107252 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.266835 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dhk6\" (UniqueName: \"kubernetes.io/projected/e6762529-2483-438a-be25-61b823fc41f1-kube-api-access-2dhk6\") pod \"e6762529-2483-438a-be25-61b823fc41f1\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.266897 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-proxy-ca-bundles\") pod \"e6762529-2483-438a-be25-61b823fc41f1\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.266966 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-client-ca\") pod \"e6762529-2483-438a-be25-61b823fc41f1\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.266998 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-config\") pod \"7cd9a390-47d5-45d2-af3e-35b9d551249b\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.267046 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-config\") pod \"e6762529-2483-438a-be25-61b823fc41f1\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.267081 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cd9a390-47d5-45d2-af3e-35b9d551249b-serving-cert\") pod \"7cd9a390-47d5-45d2-af3e-35b9d551249b\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.267192 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6762529-2483-438a-be25-61b823fc41f1-serving-cert\") pod 
\"e6762529-2483-438a-be25-61b823fc41f1\" (UID: \"e6762529-2483-438a-be25-61b823fc41f1\") " Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.267212 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rxfj\" (UniqueName: \"kubernetes.io/projected/7cd9a390-47d5-45d2-af3e-35b9d551249b-kube-api-access-9rxfj\") pod \"7cd9a390-47d5-45d2-af3e-35b9d551249b\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.267254 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-client-ca\") pod \"7cd9a390-47d5-45d2-af3e-35b9d551249b\" (UID: \"7cd9a390-47d5-45d2-af3e-35b9d551249b\") " Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.268839 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-client-ca" (OuterVolumeSpecName: "client-ca") pod "e6762529-2483-438a-be25-61b823fc41f1" (UID: "e6762529-2483-438a-be25-61b823fc41f1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.268929 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e6762529-2483-438a-be25-61b823fc41f1" (UID: "e6762529-2483-438a-be25-61b823fc41f1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.268942 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-config" (OuterVolumeSpecName: "config") pod "e6762529-2483-438a-be25-61b823fc41f1" (UID: "e6762529-2483-438a-be25-61b823fc41f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.269279 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-client-ca" (OuterVolumeSpecName: "client-ca") pod "7cd9a390-47d5-45d2-af3e-35b9d551249b" (UID: "7cd9a390-47d5-45d2-af3e-35b9d551249b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.270130 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-config" (OuterVolumeSpecName: "config") pod "7cd9a390-47d5-45d2-af3e-35b9d551249b" (UID: "7cd9a390-47d5-45d2-af3e-35b9d551249b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.273847 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6762529-2483-438a-be25-61b823fc41f1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e6762529-2483-438a-be25-61b823fc41f1" (UID: "e6762529-2483-438a-be25-61b823fc41f1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.274011 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd9a390-47d5-45d2-af3e-35b9d551249b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7cd9a390-47d5-45d2-af3e-35b9d551249b" (UID: "7cd9a390-47d5-45d2-af3e-35b9d551249b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.274669 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd9a390-47d5-45d2-af3e-35b9d551249b-kube-api-access-9rxfj" (OuterVolumeSpecName: "kube-api-access-9rxfj") pod "7cd9a390-47d5-45d2-af3e-35b9d551249b" (UID: "7cd9a390-47d5-45d2-af3e-35b9d551249b"). InnerVolumeSpecName "kube-api-access-9rxfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.274716 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6762529-2483-438a-be25-61b823fc41f1-kube-api-access-2dhk6" (OuterVolumeSpecName: "kube-api-access-2dhk6") pod "e6762529-2483-438a-be25-61b823fc41f1" (UID: "e6762529-2483-438a-be25-61b823fc41f1"). InnerVolumeSpecName "kube-api-access-2dhk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.368480 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.368529 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.368550 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.368568 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cd9a390-47d5-45d2-af3e-35b9d551249b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.368589 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6762529-2483-438a-be25-61b823fc41f1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.368610 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rxfj\" (UniqueName: \"kubernetes.io/projected/7cd9a390-47d5-45d2-af3e-35b9d551249b-kube-api-access-9rxfj\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.368631 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7cd9a390-47d5-45d2-af3e-35b9d551249b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.368651 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dhk6\" (UniqueName: \"kubernetes.io/projected/e6762529-2483-438a-be25-61b823fc41f1-kube-api-access-2dhk6\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:19 crc 
kubenswrapper[4740]: I0130 16:01:19.368670 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6762529-2483-438a-be25-61b823fc41f1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.862336 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" event={"ID":"7cd9a390-47d5-45d2-af3e-35b9d551249b","Type":"ContainerDied","Data":"e423051169724b09d9dad9dc0022611a49643580e0132499d4d3b7591febc265"} Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.862462 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.862470 4740 scope.go:117] "RemoveContainer" containerID="27d8d81708bf5f810429a8cf02d767efb87b0c360cd270fce761bd5b596e94cb" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.867168 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" event={"ID":"e6762529-2483-438a-be25-61b823fc41f1","Type":"ContainerDied","Data":"4280938db1670d126f5cb9d6e36d828b0f2d755ca5910da8c21d7fa042e141c1"} Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.867400 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-98cc8f9d-5h7qg" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.880440 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn"] Jan 30 16:01:19 crc kubenswrapper[4740]: E0130 16:01:19.880908 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd9a390-47d5-45d2-af3e-35b9d551249b" containerName="route-controller-manager" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.880940 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd9a390-47d5-45d2-af3e-35b9d551249b" containerName="route-controller-manager" Jan 30 16:01:19 crc kubenswrapper[4740]: E0130 16:01:19.880978 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6762529-2483-438a-be25-61b823fc41f1" containerName="controller-manager" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.880996 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6762529-2483-438a-be25-61b823fc41f1" containerName="controller-manager" Jan 30 16:01:19 crc kubenswrapper[4740]: E0130 16:01:19.881035 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.881054 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.881312 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6762529-2483-438a-be25-61b823fc41f1" containerName="controller-manager" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.881335 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd9a390-47d5-45d2-af3e-35b9d551249b" containerName="route-controller-manager" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.881407 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.882199 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.889744 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.890420 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.890024 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.889745 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.890137 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.890301 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.902503 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh"] Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.919782 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.923143 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.926819 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn"] Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.927069 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.927880 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.930260 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.932215 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.932603 4740 scope.go:117] "RemoveContainer" containerID="5f9033091bf8de3885892d4431e68269499fc1b1d63c740c7c159ed78c13e85e" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.946673 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.951112 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh"] Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.954624 4740 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.964188 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn"] Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.977142 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-864f7bcc8f-7kjmn"] Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.985633 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-98cc8f9d-5h7qg"] Jan 30 16:01:19 crc kubenswrapper[4740]: I0130 16:01:19.990341 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-98cc8f9d-5h7qg"] Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.079510 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6clm\" (UniqueName: \"kubernetes.io/projected/9974f6a5-4efc-434c-a308-cd2c28d7cc32-kube-api-access-h6clm\") pod \"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.079565 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9974f6a5-4efc-434c-a308-cd2c28d7cc32-serving-cert\") pod \"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.079672 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-config\") pod \"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.079719 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmbtx\" (UniqueName: \"kubernetes.io/projected/902ddbe5-8668-45bd-a6f4-2466b604e867-kube-api-access-tmbtx\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.079746 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-client-ca\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.079768 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-client-ca\") pod \"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " 
pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.079872 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-proxy-ca-bundles\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.079913 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/902ddbe5-8668-45bd-a6f4-2466b604e867-serving-cert\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.079953 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-config\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.180912 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmbtx\" (UniqueName: \"kubernetes.io/projected/902ddbe5-8668-45bd-a6f4-2466b604e867-kube-api-access-tmbtx\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.180981 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-client-ca\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.181019 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-client-ca\") pod \"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.181080 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-proxy-ca-bundles\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.181142 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/902ddbe5-8668-45bd-a6f4-2466b604e867-serving-cert\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 
16:01:20.181229 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-config\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.181304 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6clm\" (UniqueName: \"kubernetes.io/projected/9974f6a5-4efc-434c-a308-cd2c28d7cc32-kube-api-access-h6clm\") pod \"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.181433 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9974f6a5-4efc-434c-a308-cd2c28d7cc32-serving-cert\") pod \"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.181493 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-config\") pod \"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.182979 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-config\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.183148 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-proxy-ca-bundles\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.183242 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-client-ca\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.183688 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-config\") pod \"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.184024 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-client-ca\") pod 
\"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.189462 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/902ddbe5-8668-45bd-a6f4-2466b604e867-serving-cert\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.189654 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9974f6a5-4efc-434c-a308-cd2c28d7cc32-serving-cert\") pod \"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.210880 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6clm\" (UniqueName: \"kubernetes.io/projected/9974f6a5-4efc-434c-a308-cd2c28d7cc32-kube-api-access-h6clm\") pod \"route-controller-manager-7bdff68fbf-h2bzn\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.212070 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmbtx\" (UniqueName: \"kubernetes.io/projected/902ddbe5-8668-45bd-a6f4-2466b604e867-kube-api-access-tmbtx\") pod \"controller-manager-6b9bb5cb5d-m2kqh\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.213749 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.219342 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.234257 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 16:01:20 crc kubenswrapper[4740]: I0130 16:01:20.254990 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:21 crc kubenswrapper[4740]: I0130 16:01:21.348297 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd9a390-47d5-45d2-af3e-35b9d551249b" path="/var/lib/kubelet/pods/7cd9a390-47d5-45d2-af3e-35b9d551249b/volumes" Jan 30 16:01:21 crc kubenswrapper[4740]: I0130 16:01:21.350196 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6762529-2483-438a-be25-61b823fc41f1" path="/var/lib/kubelet/pods/e6762529-2483-438a-be25-61b823fc41f1/volumes" Jan 30 16:01:22 crc kubenswrapper[4740]: I0130 16:01:22.582477 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 16:01:23 crc kubenswrapper[4740]: E0130 16:01:23.228656 4740 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 16:01:23 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7bdff68fbf-h2bzn_openshift-route-controller-manager_9974f6a5-4efc-434c-a308-cd2c28d7cc32_0(1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999): error adding pod openshift-route-controller-manager_route-controller-manager-7bdff68fbf-h2bzn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999" Netns:"/var/run/netns/79c61821-3d20-411d-a4f7-1b87f057dfb9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7bdff68fbf-h2bzn;K8S_POD_INFRA_CONTAINER_ID=1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999;K8S_POD_UID=9974f6a5-4efc-434c-a308-cd2c28d7cc32" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn/9974f6a5-4efc-434c-a308-cd2c28d7cc32]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-7bdff68fbf-h2bzn in out of cluster comm: pod "route-controller-manager-7bdff68fbf-h2bzn" not found Jan 30 16:01:23 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:23 crc kubenswrapper[4740]: > Jan 30 16:01:23 crc kubenswrapper[4740]: E0130 16:01:23.229535 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 16:01:23 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7bdff68fbf-h2bzn_openshift-route-controller-manager_9974f6a5-4efc-434c-a308-cd2c28d7cc32_0(1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999): error adding pod openshift-route-controller-manager_route-controller-manager-7bdff68fbf-h2bzn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999" Netns:"/var/run/netns/79c61821-3d20-411d-a4f7-1b87f057dfb9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7bdff68fbf-h2bzn;K8S_POD_INFRA_CONTAINER_ID=1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999;K8S_POD_UID=9974f6a5-4efc-434c-a308-cd2c28d7cc32" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn/9974f6a5-4efc-434c-a308-cd2c28d7cc32]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-7bdff68fbf-h2bzn in out of cluster comm: pod "route-controller-manager-7bdff68fbf-h2bzn" not found Jan 30 16:01:23 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:23 crc kubenswrapper[4740]: > pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:23 crc kubenswrapper[4740]: E0130 16:01:23.229591 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 16:01:23 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7bdff68fbf-h2bzn_openshift-route-controller-manager_9974f6a5-4efc-434c-a308-cd2c28d7cc32_0(1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999): error adding pod openshift-route-controller-manager_route-controller-manager-7bdff68fbf-h2bzn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999" Netns:"/var/run/netns/79c61821-3d20-411d-a4f7-1b87f057dfb9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7bdff68fbf-h2bzn;K8S_POD_INFRA_CONTAINER_ID=1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999;K8S_POD_UID=9974f6a5-4efc-434c-a308-cd2c28d7cc32" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn/9974f6a5-4efc-434c-a308-cd2c28d7cc32]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-7bdff68fbf-h2bzn in out of cluster comm: pod "route-controller-manager-7bdff68fbf-h2bzn" not found Jan 30 16:01:23 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:23 crc kubenswrapper[4740]: > 
pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:23 crc kubenswrapper[4740]: E0130 16:01:23.229751 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-7bdff68fbf-h2bzn_openshift-route-controller-manager(9974f6a5-4efc-434c-a308-cd2c28d7cc32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-7bdff68fbf-h2bzn_openshift-route-controller-manager(9974f6a5-4efc-434c-a308-cd2c28d7cc32)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-7bdff68fbf-h2bzn_openshift-route-controller-manager_9974f6a5-4efc-434c-a308-cd2c28d7cc32_0(1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999): error adding pod openshift-route-controller-manager_route-controller-manager-7bdff68fbf-h2bzn to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999\\\" Netns:\\\"/var/run/netns/79c61821-3d20-411d-a4f7-1b87f057dfb9\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-7bdff68fbf-h2bzn;K8S_POD_INFRA_CONTAINER_ID=1fc35d0aa3dc2b1692affbc15e8d8e02c4763d6d9ff9e53a96016f62571eb999;K8S_POD_UID=9974f6a5-4efc-434c-a308-cd2c28d7cc32\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn] networking: Multus: [openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn/9974f6a5-4efc-434c-a308-cd2c28d7cc32]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-7bdff68fbf-h2bzn in out of cluster comm: pod \\\"route-controller-manager-7bdff68fbf-h2bzn\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" podUID="9974f6a5-4efc-434c-a308-cd2c28d7cc32" Jan 30 16:01:23 crc kubenswrapper[4740]: E0130 16:01:23.235163 4740 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 16:01:23 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6b9bb5cb5d-m2kqh_openshift-controller-manager_902ddbe5-8668-45bd-a6f4-2466b604e867_0(90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409): error adding pod openshift-controller-manager_controller-manager-6b9bb5cb5d-m2kqh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409" Netns:"/var/run/netns/9c10d17c-526b-492c-8a05-d6a4dc8bcb55" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6b9bb5cb5d-m2kqh;K8S_POD_INFRA_CONTAINER_ID=90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409;K8S_POD_UID=902ddbe5-8668-45bd-a6f4-2466b604e867" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh] networking: Multus: [openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh/902ddbe5-8668-45bd-a6f4-2466b604e867]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-6b9bb5cb5d-m2kqh in out of cluster comm: pod "controller-manager-6b9bb5cb5d-m2kqh" not found Jan 30 16:01:23 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:23 crc kubenswrapper[4740]: > Jan 30 16:01:23 crc kubenswrapper[4740]: E0130 16:01:23.235260 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 16:01:23 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6b9bb5cb5d-m2kqh_openshift-controller-manager_902ddbe5-8668-45bd-a6f4-2466b604e867_0(90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409): error adding pod openshift-controller-manager_controller-manager-6b9bb5cb5d-m2kqh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409" Netns:"/var/run/netns/9c10d17c-526b-492c-8a05-d6a4dc8bcb55" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6b9bb5cb5d-m2kqh;K8S_POD_INFRA_CONTAINER_ID=90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409;K8S_POD_UID=902ddbe5-8668-45bd-a6f4-2466b604e867" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh] networking: Multus: [openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh/902ddbe5-8668-45bd-a6f4-2466b604e867]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-6b9bb5cb5d-m2kqh in out of cluster comm: pod "controller-manager-6b9bb5cb5d-m2kqh" not found Jan 30 16:01:23 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:23 crc kubenswrapper[4740]: > pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:23 crc kubenswrapper[4740]: E0130 16:01:23.235293 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 16:01:23 crc kubenswrapper[4740]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_controller-manager-6b9bb5cb5d-m2kqh_openshift-controller-manager_902ddbe5-8668-45bd-a6f4-2466b604e867_0(90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409): error adding pod openshift-controller-manager_controller-manager-6b9bb5cb5d-m2kqh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409" Netns:"/var/run/netns/9c10d17c-526b-492c-8a05-d6a4dc8bcb55" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6b9bb5cb5d-m2kqh;K8S_POD_INFRA_CONTAINER_ID=90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409;K8S_POD_UID=902ddbe5-8668-45bd-a6f4-2466b604e867" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh] networking: Multus: [openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh/902ddbe5-8668-45bd-a6f4-2466b604e867]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-6b9bb5cb5d-m2kqh in out of cluster comm: pod "controller-manager-6b9bb5cb5d-m2kqh" not found Jan 30 16:01:23 crc kubenswrapper[4740]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 16:01:23 crc kubenswrapper[4740]: > pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:23 crc kubenswrapper[4740]: E0130 16:01:23.235413 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-6b9bb5cb5d-m2kqh_openshift-controller-manager(902ddbe5-8668-45bd-a6f4-2466b604e867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-6b9bb5cb5d-m2kqh_openshift-controller-manager(902ddbe5-8668-45bd-a6f4-2466b604e867)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6b9bb5cb5d-m2kqh_openshift-controller-manager_902ddbe5-8668-45bd-a6f4-2466b604e867_0(90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409): error adding pod openshift-controller-manager_controller-manager-6b9bb5cb5d-m2kqh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409\\\" Netns:\\\"/var/run/netns/9c10d17c-526b-492c-8a05-d6a4dc8bcb55\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6b9bb5cb5d-m2kqh;K8S_POD_INFRA_CONTAINER_ID=90a3c85f778410c666949dcdd6021d6842d0c6e8474c4b6a5b56622f7cafa409;K8S_POD_UID=902ddbe5-8668-45bd-a6f4-2466b604e867\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh] networking: Multus: [openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh/902ddbe5-8668-45bd-a6f4-2466b604e867]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod 
controller-manager-6b9bb5cb5d-m2kqh in out of cluster comm: pod \\\"controller-manager-6b9bb5cb5d-m2kqh\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" podUID="902ddbe5-8668-45bd-a6f4-2466b604e867" Jan 30 16:01:23 crc kubenswrapper[4740]: I0130 16:01:23.899338 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:23 crc kubenswrapper[4740]: I0130 16:01:23.899507 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:23 crc kubenswrapper[4740]: I0130 16:01:23.900282 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:23 crc kubenswrapper[4740]: I0130 16:01:23.900469 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:24 crc kubenswrapper[4740]: I0130 16:01:24.898973 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 16:01:24 crc kubenswrapper[4740]: I0130 16:01:24.999453 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn"] Jan 30 16:01:25 crc kubenswrapper[4740]: I0130 16:01:25.048841 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh"] Jan 30 16:01:25 crc kubenswrapper[4740]: W0130 16:01:25.054677 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod902ddbe5_8668_45bd_a6f4_2466b604e867.slice/crio-6e8e4ed655733936e9965e6d88e0604c2c32cde2d46028414bf551c687e07c1c WatchSource:0}: Error finding container 6e8e4ed655733936e9965e6d88e0604c2c32cde2d46028414bf551c687e07c1c: Status 404 returned error can't find the container with id 6e8e4ed655733936e9965e6d88e0604c2c32cde2d46028414bf551c687e07c1c Jan 30 16:01:25 crc kubenswrapper[4740]: I0130 16:01:25.926562 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" event={"ID":"9974f6a5-4efc-434c-a308-cd2c28d7cc32","Type":"ContainerStarted","Data":"79dedb5be9dfbb6f5df8b09d48819283fe61ac285a53a9a90b913d3100754a40"} Jan 30 16:01:25 crc kubenswrapper[4740]: I0130 16:01:25.926630 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" event={"ID":"9974f6a5-4efc-434c-a308-cd2c28d7cc32","Type":"ContainerStarted","Data":"a10eb6b2e1a4f58fe011373085992811aa2164dae6c79c333c48e29f7e0d201f"} Jan 30 16:01:25 crc kubenswrapper[4740]: I0130 16:01:25.926964 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:25 crc kubenswrapper[4740]: I0130 16:01:25.928327 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" event={"ID":"902ddbe5-8668-45bd-a6f4-2466b604e867","Type":"ContainerStarted","Data":"b193eafeaf6b221ba92b2b05302d04b98a881f123fdf97c97da42d18bc38bf36"} Jan 30 16:01:25 crc kubenswrapper[4740]: I0130 16:01:25.928373 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" event={"ID":"902ddbe5-8668-45bd-a6f4-2466b604e867","Type":"ContainerStarted","Data":"6e8e4ed655733936e9965e6d88e0604c2c32cde2d46028414bf551c687e07c1c"} Jan 30 16:01:25 crc kubenswrapper[4740]: I0130 16:01:25.928614 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:25 crc kubenswrapper[4740]: I0130 16:01:25.933187 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:25 crc kubenswrapper[4740]: I0130 16:01:25.935404 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:25 crc kubenswrapper[4740]: I0130 16:01:25.950713 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" podStartSLOduration=7.95068582 podStartE2EDuration="7.95068582s" podCreationTimestamp="2026-01-30 16:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:01:25.946877864 +0000 UTC m=+334.583940483" watchObservedRunningTime="2026-01-30 16:01:25.95068582 +0000 UTC m=+334.587748449" Jan 30 16:01:25 crc kubenswrapper[4740]: I0130 16:01:25.991838 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" podStartSLOduration=7.991817651 podStartE2EDuration="7.991817651s" podCreationTimestamp="2026-01-30 16:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:01:25.967615679 +0000 UTC m=+334.604678298" watchObservedRunningTime="2026-01-30 16:01:25.991817651 +0000 UTC m=+334.628880250" Jan 30 16:01:27 crc kubenswrapper[4740]: I0130 16:01:27.335153 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:01:27 crc kubenswrapper[4740]: I0130 16:01:27.335788 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:01:27 crc kubenswrapper[4740]: I0130 16:01:27.801173 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5589bfb64c-jztbw"] Jan 30 16:01:27 crc kubenswrapper[4740]: W0130 16:01:27.802760 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod330e8781_2983_4930_8960_199e6829a8c7.slice/crio-4f382dd092e9001b2c6d5e77afedaa2a7dd0752204c441a319d57de3ebbde97d WatchSource:0}: Error finding container 4f382dd092e9001b2c6d5e77afedaa2a7dd0752204c441a319d57de3ebbde97d: Status 404 returned error can't find the container with id 4f382dd092e9001b2c6d5e77afedaa2a7dd0752204c441a319d57de3ebbde97d Jan 30 16:01:27 crc kubenswrapper[4740]: I0130 16:01:27.944778 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" event={"ID":"330e8781-2983-4930-8960-199e6829a8c7","Type":"ContainerStarted","Data":"4f382dd092e9001b2c6d5e77afedaa2a7dd0752204c441a319d57de3ebbde97d"} Jan 30 16:01:28 crc kubenswrapper[4740]: I0130 16:01:28.918540 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 16:01:28 crc kubenswrapper[4740]: I0130 16:01:28.953677 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" event={"ID":"330e8781-2983-4930-8960-199e6829a8c7","Type":"ContainerStarted","Data":"9f682d68c268b44495cd216c7f831a054143077e0f1d9f62df57dcdc805b9bb4"} Jan 30 16:01:28 crc kubenswrapper[4740]: I0130 16:01:28.954193 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:01:28 crc kubenswrapper[4740]: I0130 16:01:28.962650 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" Jan 30 16:01:28 crc kubenswrapper[4740]: I0130 16:01:28.987816 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5589bfb64c-jztbw" podStartSLOduration=79.987778106 podStartE2EDuration="1m19.987778106s" podCreationTimestamp="2026-01-30 16:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:01:28.985760765 +0000 UTC m=+337.622823404" watchObservedRunningTime="2026-01-30 16:01:28.987778106 +0000 UTC m=+337.624840745" Jan 30 16:01:32 crc kubenswrapper[4740]: I0130 16:01:32.504170 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 16:01:33 crc kubenswrapper[4740]: I0130 16:01:33.122244 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 16:01:34 crc kubenswrapper[4740]: I0130 16:01:34.138661 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 16:01:34 crc kubenswrapper[4740]: I0130 16:01:34.487501 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 16:01:36 crc kubenswrapper[4740]: I0130 16:01:36.323333 4740 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 16:01:38 crc kubenswrapper[4740]: I0130 16:01:38.660987 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh"] Jan 30 16:01:38 crc kubenswrapper[4740]: I0130 16:01:38.661737 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" podUID="902ddbe5-8668-45bd-a6f4-2466b604e867" containerName="controller-manager" containerID="cri-o://b193eafeaf6b221ba92b2b05302d04b98a881f123fdf97c97da42d18bc38bf36" gracePeriod=30 Jan 30 16:01:38 crc kubenswrapper[4740]: I0130 16:01:38.689489 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn"] Jan 30 16:01:38 crc kubenswrapper[4740]: I0130 16:01:38.689936 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" podUID="9974f6a5-4efc-434c-a308-cd2c28d7cc32" containerName="route-controller-manager" containerID="cri-o://79dedb5be9dfbb6f5df8b09d48819283fe61ac285a53a9a90b913d3100754a40" gracePeriod=30 Jan 30 16:01:38 crc kubenswrapper[4740]: I0130 16:01:38.770675 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.020710 4740 generic.go:334] "Generic (PLEG): container finished" podID="902ddbe5-8668-45bd-a6f4-2466b604e867" containerID="b193eafeaf6b221ba92b2b05302d04b98a881f123fdf97c97da42d18bc38bf36" exitCode=0 Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.020776 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" event={"ID":"902ddbe5-8668-45bd-a6f4-2466b604e867","Type":"ContainerDied","Data":"b193eafeaf6b221ba92b2b05302d04b98a881f123fdf97c97da42d18bc38bf36"} Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.022178 4740 generic.go:334] "Generic (PLEG): container finished" podID="9974f6a5-4efc-434c-a308-cd2c28d7cc32" containerID="79dedb5be9dfbb6f5df8b09d48819283fe61ac285a53a9a90b913d3100754a40" exitCode=0 Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.022200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" event={"ID":"9974f6a5-4efc-434c-a308-cd2c28d7cc32","Type":"ContainerDied","Data":"79dedb5be9dfbb6f5df8b09d48819283fe61ac285a53a9a90b913d3100754a40"} Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.184301 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.247657 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.282873 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-client-ca\") pod \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.282924 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-client-ca\") pod \"902ddbe5-8668-45bd-a6f4-2466b604e867\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.282948 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6clm\" (UniqueName: \"kubernetes.io/projected/9974f6a5-4efc-434c-a308-cd2c28d7cc32-kube-api-access-h6clm\") pod \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.282971 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-proxy-ca-bundles\") pod \"902ddbe5-8668-45bd-a6f4-2466b604e867\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.282996 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-config\") pod \"902ddbe5-8668-45bd-a6f4-2466b604e867\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.283036 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-config\") pod \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.283059 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmbtx\" (UniqueName: \"kubernetes.io/projected/902ddbe5-8668-45bd-a6f4-2466b604e867-kube-api-access-tmbtx\") pod \"902ddbe5-8668-45bd-a6f4-2466b604e867\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.283078 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/902ddbe5-8668-45bd-a6f4-2466b604e867-serving-cert\") pod \"902ddbe5-8668-45bd-a6f4-2466b604e867\" (UID: \"902ddbe5-8668-45bd-a6f4-2466b604e867\") " Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.283097 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9974f6a5-4efc-434c-a308-cd2c28d7cc32-serving-cert\") pod \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\" (UID: \"9974f6a5-4efc-434c-a308-cd2c28d7cc32\") " Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.283823 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-client-ca" (OuterVolumeSpecName: "client-ca") pod "902ddbe5-8668-45bd-a6f4-2466b604e867" (UID: 
"902ddbe5-8668-45bd-a6f4-2466b604e867"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.284046 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-config" (OuterVolumeSpecName: "config") pod "902ddbe5-8668-45bd-a6f4-2466b604e867" (UID: "902ddbe5-8668-45bd-a6f4-2466b604e867"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.284443 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-client-ca" (OuterVolumeSpecName: "client-ca") pod "9974f6a5-4efc-434c-a308-cd2c28d7cc32" (UID: "9974f6a5-4efc-434c-a308-cd2c28d7cc32"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.284608 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "902ddbe5-8668-45bd-a6f4-2466b604e867" (UID: "902ddbe5-8668-45bd-a6f4-2466b604e867"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.284626 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-config" (OuterVolumeSpecName: "config") pod "9974f6a5-4efc-434c-a308-cd2c28d7cc32" (UID: "9974f6a5-4efc-434c-a308-cd2c28d7cc32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.290017 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9974f6a5-4efc-434c-a308-cd2c28d7cc32-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9974f6a5-4efc-434c-a308-cd2c28d7cc32" (UID: "9974f6a5-4efc-434c-a308-cd2c28d7cc32"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.291430 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902ddbe5-8668-45bd-a6f4-2466b604e867-kube-api-access-tmbtx" (OuterVolumeSpecName: "kube-api-access-tmbtx") pod "902ddbe5-8668-45bd-a6f4-2466b604e867" (UID: "902ddbe5-8668-45bd-a6f4-2466b604e867"). InnerVolumeSpecName "kube-api-access-tmbtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.292521 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902ddbe5-8668-45bd-a6f4-2466b604e867-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "902ddbe5-8668-45bd-a6f4-2466b604e867" (UID: "902ddbe5-8668-45bd-a6f4-2466b604e867"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.290939 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9974f6a5-4efc-434c-a308-cd2c28d7cc32-kube-api-access-h6clm" (OuterVolumeSpecName: "kube-api-access-h6clm") pod "9974f6a5-4efc-434c-a308-cd2c28d7cc32" (UID: "9974f6a5-4efc-434c-a308-cd2c28d7cc32"). 
InnerVolumeSpecName "kube-api-access-h6clm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.385133 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.385195 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.385219 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.385239 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmbtx\" (UniqueName: \"kubernetes.io/projected/902ddbe5-8668-45bd-a6f4-2466b604e867-kube-api-access-tmbtx\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.385260 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/902ddbe5-8668-45bd-a6f4-2466b604e867-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.385277 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9974f6a5-4efc-434c-a308-cd2c28d7cc32-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.385293 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9974f6a5-4efc-434c-a308-cd2c28d7cc32-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.385309 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/902ddbe5-8668-45bd-a6f4-2466b604e867-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.385326 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6clm\" (UniqueName: \"kubernetes.io/projected/9974f6a5-4efc-434c-a308-cd2c28d7cc32-kube-api-access-h6clm\") on node \"crc\" DevicePath \"\"" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.880151 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-755968fc98-zgl5f"] Jan 30 16:01:39 crc kubenswrapper[4740]: E0130 16:01:39.880472 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9974f6a5-4efc-434c-a308-cd2c28d7cc32" containerName="route-controller-manager" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.880490 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9974f6a5-4efc-434c-a308-cd2c28d7cc32" containerName="route-controller-manager" Jan 30 16:01:39 crc kubenswrapper[4740]: E0130 16:01:39.880509 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902ddbe5-8668-45bd-a6f4-2466b604e867" containerName="controller-manager" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.880516 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="902ddbe5-8668-45bd-a6f4-2466b604e867" containerName="controller-manager" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.880614 4740 
memory_manager.go:354] "RemoveStaleState removing state" podUID="902ddbe5-8668-45bd-a6f4-2466b604e867" containerName="controller-manager" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.880636 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9974f6a5-4efc-434c-a308-cd2c28d7cc32" containerName="route-controller-manager" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.881077 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.891609 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-config\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.891660 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-proxy-ca-bundles\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.891685 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-client-ca\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.891704 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca08f350-6e70-4d30-a6eb-58015d0544d8-serving-cert\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.891743 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbf8j\" (UniqueName: \"kubernetes.io/projected/ca08f350-6e70-4d30-a6eb-58015d0544d8-kube-api-access-vbf8j\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.897322 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf"] Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.898140 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.900086 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf"] Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.903276 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-755968fc98-zgl5f"] Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.993160 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbf8j\" (UniqueName: \"kubernetes.io/projected/ca08f350-6e70-4d30-a6eb-58015d0544d8-kube-api-access-vbf8j\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.993245 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-client-ca\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.993288 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a710d157-cd23-457a-882d-d0ede9278f94-serving-cert\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.993327 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84t4s\" (UniqueName: \"kubernetes.io/projected/a710d157-cd23-457a-882d-d0ede9278f94-kube-api-access-84t4s\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.993380 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-config\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.993417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-config\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.993609 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-proxy-ca-bundles\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " 
pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.993758 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-client-ca\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.993873 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca08f350-6e70-4d30-a6eb-58015d0544d8-serving-cert\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.994988 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-config\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.995140 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-proxy-ca-bundles\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:39 crc kubenswrapper[4740]: I0130 16:01:39.999387 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-client-ca\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.006162 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca08f350-6e70-4d30-a6eb-58015d0544d8-serving-cert\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.017567 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbf8j\" (UniqueName: \"kubernetes.io/projected/ca08f350-6e70-4d30-a6eb-58015d0544d8-kube-api-access-vbf8j\") pod \"controller-manager-755968fc98-zgl5f\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.028459 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" event={"ID":"9974f6a5-4efc-434c-a308-cd2c28d7cc32","Type":"ContainerDied","Data":"a10eb6b2e1a4f58fe011373085992811aa2164dae6c79c333c48e29f7e0d201f"} Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.028490 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.028543 4740 scope.go:117] "RemoveContainer" containerID="79dedb5be9dfbb6f5df8b09d48819283fe61ac285a53a9a90b913d3100754a40" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.030640 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" event={"ID":"902ddbe5-8668-45bd-a6f4-2466b604e867","Type":"ContainerDied","Data":"6e8e4ed655733936e9965e6d88e0604c2c32cde2d46028414bf551c687e07c1c"} Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.031124 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.058213 4740 scope.go:117] "RemoveContainer" containerID="b193eafeaf6b221ba92b2b05302d04b98a881f123fdf97c97da42d18bc38bf36" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.070316 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn"] Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.074687 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bdff68fbf-h2bzn"] Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.082528 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh"] Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.096006 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-client-ca\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.096075 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a710d157-cd23-457a-882d-d0ede9278f94-serving-cert\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.096114 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84t4s\" (UniqueName: \"kubernetes.io/projected/a710d157-cd23-457a-882d-d0ede9278f94-kube-api-access-84t4s\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.096197 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-config\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.097091 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-client-ca\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.097746 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-config\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.098608 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b9bb5cb5d-m2kqh"] Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.108266 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a710d157-cd23-457a-882d-d0ede9278f94-serving-cert\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.113208 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84t4s\" (UniqueName: \"kubernetes.io/projected/a710d157-cd23-457a-882d-d0ede9278f94-kube-api-access-84t4s\") pod \"route-controller-manager-854dccd4f-bkpcf\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.202569 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.216692 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.444188 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf"] Jan 30 16:01:40 crc kubenswrapper[4740]: I0130 16:01:40.489673 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-755968fc98-zgl5f"] Jan 30 16:01:40 crc kubenswrapper[4740]: W0130 16:01:40.492473 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca08f350_6e70_4d30_a6eb_58015d0544d8.slice/crio-202b67b57fbcad7232324b8e1bbafdda860300c3f94bf219946cc3eb4c800a9d WatchSource:0}: Error finding container 202b67b57fbcad7232324b8e1bbafdda860300c3f94bf219946cc3eb4c800a9d: Status 404 returned error can't find the container with id 202b67b57fbcad7232324b8e1bbafdda860300c3f94bf219946cc3eb4c800a9d Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.037657 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" event={"ID":"a710d157-cd23-457a-882d-d0ede9278f94","Type":"ContainerStarted","Data":"4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565"} Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.038344 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.038393 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" event={"ID":"a710d157-cd23-457a-882d-d0ede9278f94","Type":"ContainerStarted","Data":"5cbe3d0cf16ca31cf7081289309f5c6ff679f85b857d2e0249623cd204fd613a"} Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.039954 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" event={"ID":"ca08f350-6e70-4d30-a6eb-58015d0544d8","Type":"ContainerStarted","Data":"b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6"} Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.039998 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" event={"ID":"ca08f350-6e70-4d30-a6eb-58015d0544d8","Type":"ContainerStarted","Data":"202b67b57fbcad7232324b8e1bbafdda860300c3f94bf219946cc3eb4c800a9d"} Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.040536 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.051523 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.055290 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.068238 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" podStartSLOduration=3.068214924 podStartE2EDuration="3.068214924s" 
podCreationTimestamp="2026-01-30 16:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:01:41.063362581 +0000 UTC m=+349.700425180" watchObservedRunningTime="2026-01-30 16:01:41.068214924 +0000 UTC m=+349.705277523" Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.091873 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" podStartSLOduration=3.091826271 podStartE2EDuration="3.091826271s" podCreationTimestamp="2026-01-30 16:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:01:41.090122548 +0000 UTC m=+349.727185147" watchObservedRunningTime="2026-01-30 16:01:41.091826271 +0000 UTC m=+349.728888880" Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.242235 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.342499 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902ddbe5-8668-45bd-a6f4-2466b604e867" path="/var/lib/kubelet/pods/902ddbe5-8668-45bd-a6f4-2466b604e867/volumes" Jan 30 16:01:41 crc kubenswrapper[4740]: I0130 16:01:41.343014 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9974f6a5-4efc-434c-a308-cd2c28d7cc32" path="/var/lib/kubelet/pods/9974f6a5-4efc-434c-a308-cd2c28d7cc32/volumes" Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.192492 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-755968fc98-zgl5f"] Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.193538 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" podUID="ca08f350-6e70-4d30-a6eb-58015d0544d8" containerName="controller-manager" containerID="cri-o://b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6" gracePeriod=30 Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.281523 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf"] Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.282285 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" podUID="a710d157-cd23-457a-882d-d0ede9278f94" containerName="route-controller-manager" containerID="cri-o://4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565" gracePeriod=30 Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.755813 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.804100 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-client-ca\") pod \"a710d157-cd23-457a-882d-d0ede9278f94\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.804183 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84t4s\" (UniqueName: \"kubernetes.io/projected/a710d157-cd23-457a-882d-d0ede9278f94-kube-api-access-84t4s\") pod \"a710d157-cd23-457a-882d-d0ede9278f94\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.804229 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a710d157-cd23-457a-882d-d0ede9278f94-serving-cert\") pod \"a710d157-cd23-457a-882d-d0ede9278f94\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.804251 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-config\") pod \"a710d157-cd23-457a-882d-d0ede9278f94\" (UID: \"a710d157-cd23-457a-882d-d0ede9278f94\") " Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.805548 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-config" (OuterVolumeSpecName: "config") pod "a710d157-cd23-457a-882d-d0ede9278f94" (UID: "a710d157-cd23-457a-882d-d0ede9278f94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.805653 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-client-ca" (OuterVolumeSpecName: "client-ca") pod "a710d157-cd23-457a-882d-d0ede9278f94" (UID: "a710d157-cd23-457a-882d-d0ede9278f94"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.812800 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a710d157-cd23-457a-882d-d0ede9278f94-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a710d157-cd23-457a-882d-d0ede9278f94" (UID: "a710d157-cd23-457a-882d-d0ede9278f94"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.812914 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a710d157-cd23-457a-882d-d0ede9278f94-kube-api-access-84t4s" (OuterVolumeSpecName: "kube-api-access-84t4s") pod "a710d157-cd23-457a-882d-d0ede9278f94" (UID: "a710d157-cd23-457a-882d-d0ede9278f94"). InnerVolumeSpecName "kube-api-access-84t4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.817585 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.905313 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a710d157-cd23-457a-882d-d0ede9278f94-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.905379 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.905392 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a710d157-cd23-457a-882d-d0ede9278f94-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:04 crc kubenswrapper[4740]: I0130 16:02:04.905408 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84t4s\" (UniqueName: \"kubernetes.io/projected/a710d157-cd23-457a-882d-d0ede9278f94-kube-api-access-84t4s\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.006470 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-config\") pod \"ca08f350-6e70-4d30-a6eb-58015d0544d8\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.006528 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca08f350-6e70-4d30-a6eb-58015d0544d8-serving-cert\") pod \"ca08f350-6e70-4d30-a6eb-58015d0544d8\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.006563 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbf8j\" (UniqueName: \"kubernetes.io/projected/ca08f350-6e70-4d30-a6eb-58015d0544d8-kube-api-access-vbf8j\") pod \"ca08f350-6e70-4d30-a6eb-58015d0544d8\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.006657 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-proxy-ca-bundles\") pod \"ca08f350-6e70-4d30-a6eb-58015d0544d8\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.006754 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-client-ca\") pod \"ca08f350-6e70-4d30-a6eb-58015d0544d8\" (UID: \"ca08f350-6e70-4d30-a6eb-58015d0544d8\") " Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.008052 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-client-ca" (OuterVolumeSpecName: "client-ca") pod "ca08f350-6e70-4d30-a6eb-58015d0544d8" (UID: "ca08f350-6e70-4d30-a6eb-58015d0544d8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.008298 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-config" (OuterVolumeSpecName: "config") pod "ca08f350-6e70-4d30-a6eb-58015d0544d8" (UID: "ca08f350-6e70-4d30-a6eb-58015d0544d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.008538 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ca08f350-6e70-4d30-a6eb-58015d0544d8" (UID: "ca08f350-6e70-4d30-a6eb-58015d0544d8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.011707 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca08f350-6e70-4d30-a6eb-58015d0544d8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ca08f350-6e70-4d30-a6eb-58015d0544d8" (UID: "ca08f350-6e70-4d30-a6eb-58015d0544d8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.011834 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca08f350-6e70-4d30-a6eb-58015d0544d8-kube-api-access-vbf8j" (OuterVolumeSpecName: "kube-api-access-vbf8j") pod "ca08f350-6e70-4d30-a6eb-58015d0544d8" (UID: "ca08f350-6e70-4d30-a6eb-58015d0544d8"). InnerVolumeSpecName "kube-api-access-vbf8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.109067 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.109161 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.109191 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca08f350-6e70-4d30-a6eb-58015d0544d8-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.109219 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca08f350-6e70-4d30-a6eb-58015d0544d8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.109244 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbf8j\" (UniqueName: \"kubernetes.io/projected/ca08f350-6e70-4d30-a6eb-58015d0544d8-kube-api-access-vbf8j\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.257523 4740 generic.go:334] "Generic (PLEG): container finished" podID="a710d157-cd23-457a-882d-d0ede9278f94" containerID="4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565" exitCode=0 Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.257587 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" event={"ID":"a710d157-cd23-457a-882d-d0ede9278f94","Type":"ContainerDied","Data":"4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565"} Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.257627 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.259493 4740 scope.go:117] "RemoveContainer" containerID="4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.259391 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf" event={"ID":"a710d157-cd23-457a-882d-d0ede9278f94","Type":"ContainerDied","Data":"5cbe3d0cf16ca31cf7081289309f5c6ff679f85b857d2e0249623cd204fd613a"} Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.261861 4740 generic.go:334] "Generic (PLEG): container finished" podID="ca08f350-6e70-4d30-a6eb-58015d0544d8" containerID="b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6" exitCode=0 Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.262048 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" event={"ID":"ca08f350-6e70-4d30-a6eb-58015d0544d8","Type":"ContainerDied","Data":"b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6"} Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.262192 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" event={"ID":"ca08f350-6e70-4d30-a6eb-58015d0544d8","Type":"ContainerDied","Data":"202b67b57fbcad7232324b8e1bbafdda860300c3f94bf219946cc3eb4c800a9d"} Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.262379 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-755968fc98-zgl5f" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.291055 4740 scope.go:117] "RemoveContainer" containerID="4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565" Jan 30 16:02:05 crc kubenswrapper[4740]: E0130 16:02:05.295032 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565\": container with ID starting with 4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565 not found: ID does not exist" containerID="4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.295195 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565"} err="failed to get container status \"4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565\": rpc error: code = NotFound desc = could not find container \"4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565\": container with ID starting with 4c4d395a17b7bb1086c47d8bb8b481bd694d520e67bf2cbdb2ebf3393ef37565 not found: ID does not exist" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.295323 4740 scope.go:117] "RemoveContainer" containerID="b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.306558 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf"] Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.312280 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854dccd4f-bkpcf"] Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.316810 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-755968fc98-zgl5f"] Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.321334 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-755968fc98-zgl5f"] Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.325886 4740 scope.go:117] "RemoveContainer" containerID="b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6" Jan 30 16:02:05 crc kubenswrapper[4740]: E0130 16:02:05.326934 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6\": container with ID starting with b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6 not found: ID does not exist" containerID="b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.326994 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6"} err="failed to get container status \"b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6\": rpc error: code = NotFound desc = could not find container \"b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6\": container with ID starting with b707db509b0870923618f33ca8883a80da5b581458a2ea6928c110f34a5c2cd6 not found: ID does not exist" Jan 30 16:02:05 crc 
kubenswrapper[4740]: I0130 16:02:05.345273 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a710d157-cd23-457a-882d-d0ede9278f94" path="/var/lib/kubelet/pods/a710d157-cd23-457a-882d-d0ede9278f94/volumes" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.346600 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca08f350-6e70-4d30-a6eb-58015d0544d8" path="/var/lib/kubelet/pods/ca08f350-6e70-4d30-a6eb-58015d0544d8/volumes" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.903816 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6"] Jan 30 16:02:05 crc kubenswrapper[4740]: E0130 16:02:05.904391 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca08f350-6e70-4d30-a6eb-58015d0544d8" containerName="controller-manager" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.904417 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca08f350-6e70-4d30-a6eb-58015d0544d8" containerName="controller-manager" Jan 30 16:02:05 crc kubenswrapper[4740]: E0130 16:02:05.904450 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a710d157-cd23-457a-882d-d0ede9278f94" containerName="route-controller-manager" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.904465 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a710d157-cd23-457a-882d-d0ede9278f94" containerName="route-controller-manager" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.904684 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca08f350-6e70-4d30-a6eb-58015d0544d8" containerName="controller-manager" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.904715 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a710d157-cd23-457a-882d-d0ede9278f94" containerName="route-controller-manager" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.905546 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.910151 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.910540 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.910572 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.913561 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.914136 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.914900 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.925827 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.926282 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp"] Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.929443 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.932475 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.933243 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.933598 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.933905 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.934202 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.934817 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6"] Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.936562 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 16:02:05 crc kubenswrapper[4740]: I0130 16:02:05.942629 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp"] Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.023534 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5ebb661f-b583-4489-a1c1-4408f11c5715-serving-cert\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.024138 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-proxy-ca-bundles\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.024167 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h26j7\" (UniqueName: \"kubernetes.io/projected/5ebb661f-b583-4489-a1c1-4408f11c5715-kube-api-access-h26j7\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.024190 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-client-ca\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.024236 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-config\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.125285 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76f76b4-2e35-4ee6-8afc-174140782e65-serving-cert\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.125389 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-proxy-ca-bundles\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.125420 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-client-ca\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.125441 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h26j7\" (UniqueName: 
\"kubernetes.io/projected/5ebb661f-b583-4489-a1c1-4408f11c5715-kube-api-access-h26j7\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.125460 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqjvl\" (UniqueName: \"kubernetes.io/projected/f76f76b4-2e35-4ee6-8afc-174140782e65-kube-api-access-tqjvl\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.125484 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-client-ca\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.125526 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-config\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.125564 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-config\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.125593 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebb661f-b583-4489-a1c1-4408f11c5715-serving-cert\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.127138 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-client-ca\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.127438 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-proxy-ca-bundles\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.128301 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-config\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " 
pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.132737 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebb661f-b583-4489-a1c1-4408f11c5715-serving-cert\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.148191 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h26j7\" (UniqueName: \"kubernetes.io/projected/5ebb661f-b583-4489-a1c1-4408f11c5715-kube-api-access-h26j7\") pod \"controller-manager-6f9f9b4b6c-h9dx6\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.226379 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-config\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.226450 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76f76b4-2e35-4ee6-8afc-174140782e65-serving-cert\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.226485 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-client-ca\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.226509 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqjvl\" (UniqueName: \"kubernetes.io/projected/f76f76b4-2e35-4ee6-8afc-174140782e65-kube-api-access-tqjvl\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.227703 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-client-ca\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.228662 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-config\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.231845 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76f76b4-2e35-4ee6-8afc-174140782e65-serving-cert\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.245342 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqjvl\" (UniqueName: \"kubernetes.io/projected/f76f76b4-2e35-4ee6-8afc-174140782e65-kube-api-access-tqjvl\") pod \"route-controller-manager-5f69494987-cs7kp\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.284442 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.293878 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.654775 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6"] Jan 30 16:02:06 crc kubenswrapper[4740]: I0130 16:02:06.791315 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp"] Jan 30 16:02:06 crc kubenswrapper[4740]: W0130 16:02:06.800477 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf76f76b4_2e35_4ee6_8afc_174140782e65.slice/crio-8d5bdd6f37206bca8ae3635f2665f1ec743b9d0931a94e86565c6fd8ef538436 WatchSource:0}: Error finding container 8d5bdd6f37206bca8ae3635f2665f1ec743b9d0931a94e86565c6fd8ef538436: Status 404 returned error can't find the container with id 8d5bdd6f37206bca8ae3635f2665f1ec743b9d0931a94e86565c6fd8ef538436 Jan 30 16:02:07 crc kubenswrapper[4740]: I0130 16:02:07.279482 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" event={"ID":"5ebb661f-b583-4489-a1c1-4408f11c5715","Type":"ContainerStarted","Data":"c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61"} Jan 30 16:02:07 crc kubenswrapper[4740]: I0130 16:02:07.280101 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:07 crc kubenswrapper[4740]: I0130 16:02:07.280126 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" event={"ID":"5ebb661f-b583-4489-a1c1-4408f11c5715","Type":"ContainerStarted","Data":"9c659f746524384ad31c51589842f1c1a0364450dea04a602983ebf27b61b62b"} Jan 30 16:02:07 crc kubenswrapper[4740]: I0130 16:02:07.282065 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" event={"ID":"f76f76b4-2e35-4ee6-8afc-174140782e65","Type":"ContainerStarted","Data":"196b6ce62371b2165e7b91920731fdc0e32d8050a1d36cff8574946b64fea2ba"} Jan 30 16:02:07 crc kubenswrapper[4740]: I0130 16:02:07.282123 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" event={"ID":"f76f76b4-2e35-4ee6-8afc-174140782e65","Type":"ContainerStarted","Data":"8d5bdd6f37206bca8ae3635f2665f1ec743b9d0931a94e86565c6fd8ef538436"} Jan 30 16:02:07 crc kubenswrapper[4740]: I0130 16:02:07.282406 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:07 crc kubenswrapper[4740]: I0130 16:02:07.290922 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:07 crc kubenswrapper[4740]: I0130 16:02:07.301549 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" podStartSLOduration=3.301529765 podStartE2EDuration="3.301529765s" podCreationTimestamp="2026-01-30 16:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:02:07.300326375 +0000 UTC m=+375.937388984" watchObservedRunningTime="2026-01-30 16:02:07.301529765 +0000 UTC m=+375.938592364" Jan 30 16:02:07 crc kubenswrapper[4740]: I0130 16:02:07.323564 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" podStartSLOduration=3.323544313 podStartE2EDuration="3.323544313s" podCreationTimestamp="2026-01-30 16:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:02:07.323073291 +0000 UTC m=+375.960135910" watchObservedRunningTime="2026-01-30 16:02:07.323544313 +0000 UTC m=+375.960606912" Jan 30 16:02:07 crc kubenswrapper[4740]: I0130 16:02:07.545282 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:19 crc kubenswrapper[4740]: I0130 16:02:19.161510 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp"] Jan 30 16:02:19 crc kubenswrapper[4740]: I0130 16:02:19.163011 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" podUID="f76f76b4-2e35-4ee6-8afc-174140782e65" containerName="route-controller-manager" containerID="cri-o://196b6ce62371b2165e7b91920731fdc0e32d8050a1d36cff8574946b64fea2ba" gracePeriod=30 Jan 30 16:02:19 crc kubenswrapper[4740]: I0130 16:02:19.363791 4740 generic.go:334] "Generic (PLEG): container finished" podID="f76f76b4-2e35-4ee6-8afc-174140782e65" containerID="196b6ce62371b2165e7b91920731fdc0e32d8050a1d36cff8574946b64fea2ba" exitCode=0 Jan 30 16:02:19 crc kubenswrapper[4740]: I0130 16:02:19.363893 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" event={"ID":"f76f76b4-2e35-4ee6-8afc-174140782e65","Type":"ContainerDied","Data":"196b6ce62371b2165e7b91920731fdc0e32d8050a1d36cff8574946b64fea2ba"} Jan 30 16:02:19 crc kubenswrapper[4740]: I0130 16:02:19.393176 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vll9v"] Jan 30 16:02:19 crc kubenswrapper[4740]: I0130 16:02:19.393522 4740 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vll9v" podUID="78371818-b6fe-4aff-aa1a-95d25333ccb6" containerName="registry-server" containerID="cri-o://dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670" gracePeriod=2 Jan 30 16:02:19 crc kubenswrapper[4740]: I0130 16:02:19.875827 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.006604 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vll9v" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.046554 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-client-ca\") pod \"f76f76b4-2e35-4ee6-8afc-174140782e65\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.046688 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-config\") pod \"f76f76b4-2e35-4ee6-8afc-174140782e65\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.047604 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqjvl\" (UniqueName: \"kubernetes.io/projected/f76f76b4-2e35-4ee6-8afc-174140782e65-kube-api-access-tqjvl\") pod \"f76f76b4-2e35-4ee6-8afc-174140782e65\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.047693 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76f76b4-2e35-4ee6-8afc-174140782e65-serving-cert\") pod \"f76f76b4-2e35-4ee6-8afc-174140782e65\" (UID: \"f76f76b4-2e35-4ee6-8afc-174140782e65\") " Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.048062 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-config" (OuterVolumeSpecName: "config") pod "f76f76b4-2e35-4ee6-8afc-174140782e65" (UID: "f76f76b4-2e35-4ee6-8afc-174140782e65"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.048169 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.048708 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-client-ca" (OuterVolumeSpecName: "client-ca") pod "f76f76b4-2e35-4ee6-8afc-174140782e65" (UID: "f76f76b4-2e35-4ee6-8afc-174140782e65"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.054861 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76f76b4-2e35-4ee6-8afc-174140782e65-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f76f76b4-2e35-4ee6-8afc-174140782e65" (UID: "f76f76b4-2e35-4ee6-8afc-174140782e65"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.055394 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76f76b4-2e35-4ee6-8afc-174140782e65-kube-api-access-tqjvl" (OuterVolumeSpecName: "kube-api-access-tqjvl") pod "f76f76b4-2e35-4ee6-8afc-174140782e65" (UID: "f76f76b4-2e35-4ee6-8afc-174140782e65"). InnerVolumeSpecName "kube-api-access-tqjvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.149521 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-utilities\") pod \"78371818-b6fe-4aff-aa1a-95d25333ccb6\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.149631 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-catalog-content\") pod \"78371818-b6fe-4aff-aa1a-95d25333ccb6\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.149718 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnxvm\" (UniqueName: \"kubernetes.io/projected/78371818-b6fe-4aff-aa1a-95d25333ccb6-kube-api-access-rnxvm\") pod \"78371818-b6fe-4aff-aa1a-95d25333ccb6\" (UID: \"78371818-b6fe-4aff-aa1a-95d25333ccb6\") " Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.150075 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f76f76b4-2e35-4ee6-8afc-174140782e65-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.150100 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f76f76b4-2e35-4ee6-8afc-174140782e65-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.150115 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqjvl\" (UniqueName: \"kubernetes.io/projected/f76f76b4-2e35-4ee6-8afc-174140782e65-kube-api-access-tqjvl\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.151810 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-utilities" (OuterVolumeSpecName: "utilities") pod "78371818-b6fe-4aff-aa1a-95d25333ccb6" (UID: "78371818-b6fe-4aff-aa1a-95d25333ccb6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.153013 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78371818-b6fe-4aff-aa1a-95d25333ccb6-kube-api-access-rnxvm" (OuterVolumeSpecName: "kube-api-access-rnxvm") pod "78371818-b6fe-4aff-aa1a-95d25333ccb6" (UID: "78371818-b6fe-4aff-aa1a-95d25333ccb6"). InnerVolumeSpecName "kube-api-access-rnxvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.252580 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.252674 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnxvm\" (UniqueName: \"kubernetes.io/projected/78371818-b6fe-4aff-aa1a-95d25333ccb6-kube-api-access-rnxvm\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.330136 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78371818-b6fe-4aff-aa1a-95d25333ccb6" (UID: "78371818-b6fe-4aff-aa1a-95d25333ccb6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.353477 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78371818-b6fe-4aff-aa1a-95d25333ccb6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.372926 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.372955 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp" event={"ID":"f76f76b4-2e35-4ee6-8afc-174140782e65","Type":"ContainerDied","Data":"8d5bdd6f37206bca8ae3635f2665f1ec743b9d0931a94e86565c6fd8ef538436"} Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.373010 4740 scope.go:117] "RemoveContainer" containerID="196b6ce62371b2165e7b91920731fdc0e32d8050a1d36cff8574946b64fea2ba" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.379269 4740 generic.go:334] "Generic (PLEG): container finished" podID="78371818-b6fe-4aff-aa1a-95d25333ccb6" containerID="dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670" exitCode=0 Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.379332 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vll9v" event={"ID":"78371818-b6fe-4aff-aa1a-95d25333ccb6","Type":"ContainerDied","Data":"dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670"} Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.379414 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vll9v" event={"ID":"78371818-b6fe-4aff-aa1a-95d25333ccb6","Type":"ContainerDied","Data":"9b7286e81db3f2b6f2ae2f047f6c39127e2ff414ef06e23f582a905697de2a52"} Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.379579 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vll9v" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.399487 4740 scope.go:117] "RemoveContainer" containerID="dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.418460 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp"] Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.425523 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f69494987-cs7kp"] Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.445695 4740 scope.go:117] "RemoveContainer" containerID="dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.449205 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vll9v"] Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.455012 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vll9v"] Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.488680 4740 scope.go:117] "RemoveContainer" containerID="0e7c9057bdefabe1f74b7dba0aecb3cc005853331c780ff20026924ab3e00da2" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.516581 4740 scope.go:117] "RemoveContainer" containerID="dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670" Jan 30 16:02:20 crc kubenswrapper[4740]: E0130 16:02:20.517178 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670\": container with ID starting with dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670 not found: ID does not exist" containerID="dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.517240 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670"} err="failed to get container status \"dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670\": rpc error: code = NotFound desc = could not find container \"dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670\": container with ID starting with dc1d907cb86be0bd01dad333f683870606906e458a6b5125a70a4cfd79157670 not found: ID does not exist" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.517285 4740 scope.go:117] "RemoveContainer" containerID="dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e" Jan 30 16:02:20 crc kubenswrapper[4740]: E0130 16:02:20.517714 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e\": container with ID starting with dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e not found: ID does not exist" containerID="dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.517808 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e"} err="failed to get container status 
\"dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e\": rpc error: code = NotFound desc = could not find container \"dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e\": container with ID starting with dbbaa98e44412a829c7828adf9faebcbcb655f22cc3509cbf940c06c1bf7fd2e not found: ID does not exist" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.517874 4740 scope.go:117] "RemoveContainer" containerID="0e7c9057bdefabe1f74b7dba0aecb3cc005853331c780ff20026924ab3e00da2" Jan 30 16:02:20 crc kubenswrapper[4740]: E0130 16:02:20.518278 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e7c9057bdefabe1f74b7dba0aecb3cc005853331c780ff20026924ab3e00da2\": container with ID starting with 0e7c9057bdefabe1f74b7dba0aecb3cc005853331c780ff20026924ab3e00da2 not found: ID does not exist" containerID="0e7c9057bdefabe1f74b7dba0aecb3cc005853331c780ff20026924ab3e00da2" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.518321 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e7c9057bdefabe1f74b7dba0aecb3cc005853331c780ff20026924ab3e00da2"} err="failed to get container status \"0e7c9057bdefabe1f74b7dba0aecb3cc005853331c780ff20026924ab3e00da2\": rpc error: code = NotFound desc = could not find container \"0e7c9057bdefabe1f74b7dba0aecb3cc005853331c780ff20026924ab3e00da2\": container with ID starting with 0e7c9057bdefabe1f74b7dba0aecb3cc005853331c780ff20026924ab3e00da2 not found: ID does not exist" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.909120 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5"] Jan 30 16:02:20 crc kubenswrapper[4740]: E0130 16:02:20.909465 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78371818-b6fe-4aff-aa1a-95d25333ccb6" containerName="extract-content" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.909483 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="78371818-b6fe-4aff-aa1a-95d25333ccb6" containerName="extract-content" Jan 30 16:02:20 crc kubenswrapper[4740]: E0130 16:02:20.909494 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76f76b4-2e35-4ee6-8afc-174140782e65" containerName="route-controller-manager" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.909500 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76f76b4-2e35-4ee6-8afc-174140782e65" containerName="route-controller-manager" Jan 30 16:02:20 crc kubenswrapper[4740]: E0130 16:02:20.909510 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78371818-b6fe-4aff-aa1a-95d25333ccb6" containerName="registry-server" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.909518 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="78371818-b6fe-4aff-aa1a-95d25333ccb6" containerName="registry-server" Jan 30 16:02:20 crc kubenswrapper[4740]: E0130 16:02:20.909530 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78371818-b6fe-4aff-aa1a-95d25333ccb6" containerName="extract-utilities" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.909536 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="78371818-b6fe-4aff-aa1a-95d25333ccb6" containerName="extract-utilities" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.909630 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="78371818-b6fe-4aff-aa1a-95d25333ccb6" 
containerName="registry-server" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.909644 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76f76b4-2e35-4ee6-8afc-174140782e65" containerName="route-controller-manager" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.910082 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.913762 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.914094 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.914399 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.914442 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.915258 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.915394 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 16:02:20 crc kubenswrapper[4740]: I0130 16:02:20.929914 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5"] Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.064574 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a20d5930-686c-4a87-b749-3c3f25f538d5-serving-cert\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.064865 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a20d5930-686c-4a87-b749-3c3f25f538d5-client-ca\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.064976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb77m\" (UniqueName: \"kubernetes.io/projected/a20d5930-686c-4a87-b749-3c3f25f538d5-kube-api-access-hb77m\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.065017 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a20d5930-686c-4a87-b749-3c3f25f538d5-config\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " 
pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.166061 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a20d5930-686c-4a87-b749-3c3f25f538d5-client-ca\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.166152 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb77m\" (UniqueName: \"kubernetes.io/projected/a20d5930-686c-4a87-b749-3c3f25f538d5-kube-api-access-hb77m\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.166305 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a20d5930-686c-4a87-b749-3c3f25f538d5-config\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.166472 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a20d5930-686c-4a87-b749-3c3f25f538d5-serving-cert\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.168190 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a20d5930-686c-4a87-b749-3c3f25f538d5-client-ca\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.168703 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a20d5930-686c-4a87-b749-3c3f25f538d5-config\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.171739 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a20d5930-686c-4a87-b749-3c3f25f538d5-serving-cert\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.197291 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb77m\" (UniqueName: \"kubernetes.io/projected/a20d5930-686c-4a87-b749-3c3f25f538d5-kube-api-access-hb77m\") pod \"route-controller-manager-854dccd4f-bxdx5\" (UID: \"a20d5930-686c-4a87-b749-3c3f25f538d5\") " pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc 
kubenswrapper[4740]: I0130 16:02:21.272321 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.343323 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78371818-b6fe-4aff-aa1a-95d25333ccb6" path="/var/lib/kubelet/pods/78371818-b6fe-4aff-aa1a-95d25333ccb6/volumes" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.344602 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76f76b4-2e35-4ee6-8afc-174140782e65" path="/var/lib/kubelet/pods/f76f76b4-2e35-4ee6-8afc-174140782e65/volumes" Jan 30 16:02:21 crc kubenswrapper[4740]: I0130 16:02:21.839617 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5"] Jan 30 16:02:21 crc kubenswrapper[4740]: W0130 16:02:21.848188 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda20d5930_686c_4a87_b749_3c3f25f538d5.slice/crio-1ca1741f259541c60ac637b7d3193d8a396ea77505f8dadaff9508dc384d5bdd WatchSource:0}: Error finding container 1ca1741f259541c60ac637b7d3193d8a396ea77505f8dadaff9508dc384d5bdd: Status 404 returned error can't find the container with id 1ca1741f259541c60ac637b7d3193d8a396ea77505f8dadaff9508dc384d5bdd Jan 30 16:02:22 crc kubenswrapper[4740]: I0130 16:02:22.423288 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" event={"ID":"a20d5930-686c-4a87-b749-3c3f25f538d5","Type":"ContainerStarted","Data":"643d56fa60d661ae1630f9fb6ad8b986d79de925bcb3a470c1158c8538f0f422"} Jan 30 16:02:22 crc kubenswrapper[4740]: I0130 16:02:22.423383 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" event={"ID":"a20d5930-686c-4a87-b749-3c3f25f538d5","Type":"ContainerStarted","Data":"1ca1741f259541c60ac637b7d3193d8a396ea77505f8dadaff9508dc384d5bdd"} Jan 30 16:02:22 crc kubenswrapper[4740]: I0130 16:02:22.425261 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:22 crc kubenswrapper[4740]: I0130 16:02:22.446566 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" podStartSLOduration=3.446537215 podStartE2EDuration="3.446537215s" podCreationTimestamp="2026-01-30 16:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:02:22.444206705 +0000 UTC m=+391.081269344" watchObservedRunningTime="2026-01-30 16:02:22.446537215 +0000 UTC m=+391.083599854" Jan 30 16:02:22 crc kubenswrapper[4740]: I0130 16:02:22.620979 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-854dccd4f-bxdx5" Jan 30 16:02:24 crc kubenswrapper[4740]: I0130 16:02:24.455072 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:02:24 
crc kubenswrapper[4740]: I0130 16:02:24.455191 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:02:24 crc kubenswrapper[4740]: I0130 16:02:24.917302 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qlgnh"] Jan 30 16:02:24 crc kubenswrapper[4740]: I0130 16:02:24.918072 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:24 crc kubenswrapper[4740]: I0130 16:02:24.938511 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qlgnh"] Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.032379 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3c487894-5114-415b-bf90-e3b7124d619d-bound-sa-token\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.032467 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlqkw\" (UniqueName: \"kubernetes.io/projected/3c487894-5114-415b-bf90-e3b7124d619d-kube-api-access-nlqkw\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.032519 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.032583 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3c487894-5114-415b-bf90-e3b7124d619d-registry-tls\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.032608 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3c487894-5114-415b-bf90-e3b7124d619d-registry-certificates\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.032638 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3c487894-5114-415b-bf90-e3b7124d619d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.032663 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3c487894-5114-415b-bf90-e3b7124d619d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.032802 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c487894-5114-415b-bf90-e3b7124d619d-trusted-ca\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.056101 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.134085 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3c487894-5114-415b-bf90-e3b7124d619d-registry-tls\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.134139 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3c487894-5114-415b-bf90-e3b7124d619d-registry-certificates\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.134174 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3c487894-5114-415b-bf90-e3b7124d619d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.134194 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3c487894-5114-415b-bf90-e3b7124d619d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.134228 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c487894-5114-415b-bf90-e3b7124d619d-trusted-ca\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.134262 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3c487894-5114-415b-bf90-e3b7124d619d-bound-sa-token\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.134286 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlqkw\" (UniqueName: \"kubernetes.io/projected/3c487894-5114-415b-bf90-e3b7124d619d-kube-api-access-nlqkw\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.134854 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3c487894-5114-415b-bf90-e3b7124d619d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.136264 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3c487894-5114-415b-bf90-e3b7124d619d-registry-certificates\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.136980 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3c487894-5114-415b-bf90-e3b7124d619d-trusted-ca\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.143225 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3c487894-5114-415b-bf90-e3b7124d619d-registry-tls\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.143457 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3c487894-5114-415b-bf90-e3b7124d619d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.150402 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3c487894-5114-415b-bf90-e3b7124d619d-bound-sa-token\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.151130 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlqkw\" (UniqueName: \"kubernetes.io/projected/3c487894-5114-415b-bf90-e3b7124d619d-kube-api-access-nlqkw\") pod \"image-registry-66df7c8f76-qlgnh\" (UID: \"3c487894-5114-415b-bf90-e3b7124d619d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.241069 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:25 crc kubenswrapper[4740]: I0130 16:02:25.752187 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qlgnh"] Jan 30 16:02:25 crc kubenswrapper[4740]: W0130 16:02:25.760316 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c487894_5114_415b_bf90_e3b7124d619d.slice/crio-d211d7c5de81479db3534acf0ef032e6593c5140340c9ca89ff783209c10bdba WatchSource:0}: Error finding container d211d7c5de81479db3534acf0ef032e6593c5140340c9ca89ff783209c10bdba: Status 404 returned error can't find the container with id d211d7c5de81479db3534acf0ef032e6593c5140340c9ca89ff783209c10bdba Jan 30 16:02:26 crc kubenswrapper[4740]: I0130 16:02:26.454507 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" event={"ID":"3c487894-5114-415b-bf90-e3b7124d619d","Type":"ContainerStarted","Data":"07f97bef95af978e3167b2a5419bfc489d1ce42872664d19a8a42dc79ff53d86"} Jan 30 16:02:26 crc kubenswrapper[4740]: I0130 16:02:26.455140 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:26 crc kubenswrapper[4740]: I0130 16:02:26.455157 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" event={"ID":"3c487894-5114-415b-bf90-e3b7124d619d","Type":"ContainerStarted","Data":"d211d7c5de81479db3534acf0ef032e6593c5140340c9ca89ff783209c10bdba"} Jan 30 16:02:26 crc kubenswrapper[4740]: I0130 16:02:26.492690 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" podStartSLOduration=2.492656968 podStartE2EDuration="2.492656968s" podCreationTimestamp="2026-01-30 16:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:02:26.485216328 +0000 UTC m=+395.122278967" watchObservedRunningTime="2026-01-30 16:02:26.492656968 +0000 UTC m=+395.129719607" Jan 30 16:02:38 crc kubenswrapper[4740]: I0130 16:02:38.678007 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6"] Jan 30 16:02:38 crc kubenswrapper[4740]: I0130 16:02:38.679120 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" podUID="5ebb661f-b583-4489-a1c1-4408f11c5715" containerName="controller-manager" containerID="cri-o://c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61" gracePeriod=30 Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.281042 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.411110 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebb661f-b583-4489-a1c1-4408f11c5715-serving-cert\") pod \"5ebb661f-b583-4489-a1c1-4408f11c5715\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.411248 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-config\") pod \"5ebb661f-b583-4489-a1c1-4408f11c5715\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.411332 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h26j7\" (UniqueName: \"kubernetes.io/projected/5ebb661f-b583-4489-a1c1-4408f11c5715-kube-api-access-h26j7\") pod \"5ebb661f-b583-4489-a1c1-4408f11c5715\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.411382 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-proxy-ca-bundles\") pod \"5ebb661f-b583-4489-a1c1-4408f11c5715\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.411408 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-client-ca\") pod \"5ebb661f-b583-4489-a1c1-4408f11c5715\" (UID: \"5ebb661f-b583-4489-a1c1-4408f11c5715\") " Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.412715 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ebb661f-b583-4489-a1c1-4408f11c5715" (UID: "5ebb661f-b583-4489-a1c1-4408f11c5715"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.412746 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-config" (OuterVolumeSpecName: "config") pod "5ebb661f-b583-4489-a1c1-4408f11c5715" (UID: "5ebb661f-b583-4489-a1c1-4408f11c5715"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.412742 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5ebb661f-b583-4489-a1c1-4408f11c5715" (UID: "5ebb661f-b583-4489-a1c1-4408f11c5715"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.418238 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebb661f-b583-4489-a1c1-4408f11c5715-kube-api-access-h26j7" (OuterVolumeSpecName: "kube-api-access-h26j7") pod "5ebb661f-b583-4489-a1c1-4408f11c5715" (UID: "5ebb661f-b583-4489-a1c1-4408f11c5715"). InnerVolumeSpecName "kube-api-access-h26j7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.418517 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ebb661f-b583-4489-a1c1-4408f11c5715-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ebb661f-b583-4489-a1c1-4408f11c5715" (UID: "5ebb661f-b583-4489-a1c1-4408f11c5715"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.513330 4740 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebb661f-b583-4489-a1c1-4408f11c5715-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.513437 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.513452 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h26j7\" (UniqueName: \"kubernetes.io/projected/5ebb661f-b583-4489-a1c1-4408f11c5715-kube-api-access-h26j7\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.513466 4740 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.513480 4740 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ebb661f-b583-4489-a1c1-4408f11c5715-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.568144 4740 generic.go:334] "Generic (PLEG): container finished" podID="5ebb661f-b583-4489-a1c1-4408f11c5715" containerID="c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61" exitCode=0 Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.568208 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" event={"ID":"5ebb661f-b583-4489-a1c1-4408f11c5715","Type":"ContainerDied","Data":"c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61"} Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.568251 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" event={"ID":"5ebb661f-b583-4489-a1c1-4408f11c5715","Type":"ContainerDied","Data":"9c659f746524384ad31c51589842f1c1a0364450dea04a602983ebf27b61b62b"} Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.568278 4740 scope.go:117] "RemoveContainer" containerID="c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.568495 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.598157 4740 scope.go:117] "RemoveContainer" containerID="c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61" Jan 30 16:02:39 crc kubenswrapper[4740]: E0130 16:02:39.608430 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61\": container with ID starting with c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61 not found: ID does not exist" containerID="c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.608502 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61"} err="failed to get container status \"c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61\": rpc error: code = NotFound desc = could not find container \"c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61\": container with ID starting with c5b6f192332a32b82f7c818413679943bd13055bc41c9b4648cfa71e066b4f61 not found: ID does not exist" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.624210 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6"] Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.643504 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f9f9b4b6c-h9dx6"] Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.925075 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-755968fc98-69l7j"] Jan 30 16:02:39 crc kubenswrapper[4740]: E0130 16:02:39.925833 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebb661f-b583-4489-a1c1-4408f11c5715" containerName="controller-manager" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.925856 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebb661f-b583-4489-a1c1-4408f11c5715" containerName="controller-manager" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.926048 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ebb661f-b583-4489-a1c1-4408f11c5715" containerName="controller-manager" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.926778 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.929652 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.937057 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.938611 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.939942 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.939989 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.942226 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.957391 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:02:39 crc kubenswrapper[4740]: I0130 16:02:39.972531 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-755968fc98-69l7j"] Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.021600 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db81f525-7fff-452b-b707-93bfd105e742-client-ca\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.021673 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db81f525-7fff-452b-b707-93bfd105e742-config\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.021726 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db81f525-7fff-452b-b707-93bfd105e742-serving-cert\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.021918 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db81f525-7fff-452b-b707-93bfd105e742-proxy-ca-bundles\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.022020 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57k9h\" (UniqueName: 
\"kubernetes.io/projected/db81f525-7fff-452b-b707-93bfd105e742-kube-api-access-57k9h\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.123554 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db81f525-7fff-452b-b707-93bfd105e742-client-ca\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.123642 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db81f525-7fff-452b-b707-93bfd105e742-config\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.123676 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db81f525-7fff-452b-b707-93bfd105e742-serving-cert\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.123735 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db81f525-7fff-452b-b707-93bfd105e742-proxy-ca-bundles\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.123774 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57k9h\" (UniqueName: \"kubernetes.io/projected/db81f525-7fff-452b-b707-93bfd105e742-kube-api-access-57k9h\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.125030 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db81f525-7fff-452b-b707-93bfd105e742-client-ca\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.125545 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db81f525-7fff-452b-b707-93bfd105e742-config\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.125801 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db81f525-7fff-452b-b707-93bfd105e742-proxy-ca-bundles\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " 
pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.129916 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db81f525-7fff-452b-b707-93bfd105e742-serving-cert\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.147238 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57k9h\" (UniqueName: \"kubernetes.io/projected/db81f525-7fff-452b-b707-93bfd105e742-kube-api-access-57k9h\") pod \"controller-manager-755968fc98-69l7j\" (UID: \"db81f525-7fff-452b-b707-93bfd105e742\") " pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.276028 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:40 crc kubenswrapper[4740]: I0130 16:02:40.716069 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-755968fc98-69l7j"] Jan 30 16:02:41 crc kubenswrapper[4740]: I0130 16:02:41.342927 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebb661f-b583-4489-a1c1-4408f11c5715" path="/var/lib/kubelet/pods/5ebb661f-b583-4489-a1c1-4408f11c5715/volumes" Jan 30 16:02:41 crc kubenswrapper[4740]: I0130 16:02:41.587364 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" event={"ID":"db81f525-7fff-452b-b707-93bfd105e742","Type":"ContainerStarted","Data":"d661af3f5024d77ca7e371e303e2daa205ded7da32a81a762d8f8af9788cd4fa"} Jan 30 16:02:41 crc kubenswrapper[4740]: I0130 16:02:41.587436 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" event={"ID":"db81f525-7fff-452b-b707-93bfd105e742","Type":"ContainerStarted","Data":"6f6db3917f5d06b1bfc4b7912ff362bdca4982dcd466a6dcc3f7409de150cb3c"} Jan 30 16:02:41 crc kubenswrapper[4740]: I0130 16:02:41.587755 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:41 crc kubenswrapper[4740]: I0130 16:02:41.593211 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" Jan 30 16:02:41 crc kubenswrapper[4740]: I0130 16:02:41.607327 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-755968fc98-69l7j" podStartSLOduration=3.607306936 podStartE2EDuration="3.607306936s" podCreationTimestamp="2026-01-30 16:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:02:41.606396703 +0000 UTC m=+410.243459312" watchObservedRunningTime="2026-01-30 16:02:41.607306936 +0000 UTC m=+410.244369545" Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.345257 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vsqch"] Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.346392 4740 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-vsqch" podUID="d3a1319e-f522-47f8-91ad-71235f9e9f45" containerName="registry-server" containerID="cri-o://0bab91095e39f89ad615d56fa28b0b4638904622fc95c92d71b4c33da580a93f" gracePeriod=30 Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.352861 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7xmh"] Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.353365 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q7xmh" podUID="b2316420-3b75-4623-a7ef-3ae90e376158" containerName="registry-server" containerID="cri-o://be2aeded559d301bd9d43a9e42ff29f3ea88c2f167b97f009b370771c8ae4c7a" gracePeriod=30 Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.370647 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjf5j"] Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.370974 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" podUID="75b41ccc-dc45-4c27-8b9e-99cdddb63824" containerName="marketplace-operator" containerID="cri-o://b94d1163766e043bebcc1491bde2135778585825bf8ee9a417d08d9e77b93d94" gracePeriod=30 Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.386245 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whb2w"] Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.389691 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-whb2w" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerName="registry-server" containerID="cri-o://943d7b9001e349eb51829d30dc2d34e6b6457a90e92d05f1ab4a2554cf46738b" gracePeriod=30 Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.390891 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqx29"] Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.391852 4740 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.391852 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.401930 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqx29"]
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.405841 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kws9w"]
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.406139 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kws9w" podUID="880ca711-7365-46af-b0dc-c0500d79f658" containerName="registry-server" containerID="cri-o://033cce39bc84e5606258e3c1e7bf92cb937ad8874ba44f615ad7454b218ef7d4" gracePeriod=30
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.489338 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n872t\" (UniqueName: \"kubernetes.io/projected/74ccccc6-dc66-4346-8d92-b38103ce5d69-kube-api-access-n872t\") pod \"marketplace-operator-79b997595-gqx29\" (UID: \"74ccccc6-dc66-4346-8d92-b38103ce5d69\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.489415 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74ccccc6-dc66-4346-8d92-b38103ce5d69-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gqx29\" (UID: \"74ccccc6-dc66-4346-8d92-b38103ce5d69\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.489510 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74ccccc6-dc66-4346-8d92-b38103ce5d69-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gqx29\" (UID: \"74ccccc6-dc66-4346-8d92-b38103ce5d69\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.590882 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n872t\" (UniqueName: \"kubernetes.io/projected/74ccccc6-dc66-4346-8d92-b38103ce5d69-kube-api-access-n872t\") pod \"marketplace-operator-79b997595-gqx29\" (UID: \"74ccccc6-dc66-4346-8d92-b38103ce5d69\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.590934 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74ccccc6-dc66-4346-8d92-b38103ce5d69-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gqx29\" (UID: \"74ccccc6-dc66-4346-8d92-b38103ce5d69\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.590984 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74ccccc6-dc66-4346-8d92-b38103ce5d69-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gqx29\" (UID: \"74ccccc6-dc66-4346-8d92-b38103ce5d69\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
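
The reconciler entries above trace the volume path for the replacement operator pod in order: VerifyControllerAttachedVolume (reconciler_common.go:245), then MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637, next entries). For configmap/secret/projected volumes, SetUp materializes the data under the pod's volumes directory, the same path family shown by the "Cleaned up orphaned pod volumes dir" entries. A sketch of that layout, with the path scheme inferred from this log; the "~" escaping of the plugin name is an assumption here, not something the log shows:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// podVolumeDir mirrors the /var/lib/kubelet/pods/<uid>/volumes/... layout seen
// in the log; plugin names like kubernetes.io/configmap have "/" escaped as "~"
// (assumed escaping, stated up front).
func podVolumeDir(podUID, pluginName, volumeName string) string {
	escaped := strings.ReplaceAll(pluginName, "/", "~")
	return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", escaped, volumeName)
}

func main() {
	fmt.Println(podVolumeDir("74ccccc6-dc66-4346-8d92-b38103ce5d69",
		"kubernetes.io/configmap", "marketplace-trusted-ca"))
	// /var/lib/kubelet/pods/74ccccc6-.../volumes/kubernetes.io~configmap/marketplace-trusted-ca
}
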
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.592621 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74ccccc6-dc66-4346-8d92-b38103ce5d69-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gqx29\" (UID: \"74ccccc6-dc66-4346-8d92-b38103ce5d69\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.611583 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74ccccc6-dc66-4346-8d92-b38103ce5d69-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gqx29\" (UID: \"74ccccc6-dc66-4346-8d92-b38103ce5d69\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.612892 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n872t\" (UniqueName: \"kubernetes.io/projected/74ccccc6-dc66-4346-8d92-b38103ce5d69-kube-api-access-n872t\") pod \"marketplace-operator-79b997595-gqx29\" (UID: \"74ccccc6-dc66-4346-8d92-b38103ce5d69\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.616984 4740 generic.go:334] "Generic (PLEG): container finished" podID="d3a1319e-f522-47f8-91ad-71235f9e9f45" containerID="0bab91095e39f89ad615d56fa28b0b4638904622fc95c92d71b4c33da580a93f" exitCode=0
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.617239 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsqch" event={"ID":"d3a1319e-f522-47f8-91ad-71235f9e9f45","Type":"ContainerDied","Data":"0bab91095e39f89ad615d56fa28b0b4638904622fc95c92d71b4c33da580a93f"}
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.619149 4740 generic.go:334] "Generic (PLEG): container finished" podID="75b41ccc-dc45-4c27-8b9e-99cdddb63824" containerID="b94d1163766e043bebcc1491bde2135778585825bf8ee9a417d08d9e77b93d94" exitCode=0
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.619247 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" event={"ID":"75b41ccc-dc45-4c27-8b9e-99cdddb63824","Type":"ContainerDied","Data":"b94d1163766e043bebcc1491bde2135778585825bf8ee9a417d08d9e77b93d94"}
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.624362 4740 generic.go:334] "Generic (PLEG): container finished" podID="880ca711-7365-46af-b0dc-c0500d79f658" containerID="033cce39bc84e5606258e3c1e7bf92cb937ad8874ba44f615ad7454b218ef7d4" exitCode=0
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.624579 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kws9w" event={"ID":"880ca711-7365-46af-b0dc-c0500d79f658","Type":"ContainerDied","Data":"033cce39bc84e5606258e3c1e7bf92cb937ad8874ba44f615ad7454b218ef7d4"}
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.627581 4740 generic.go:334] "Generic (PLEG): container finished" podID="b2316420-3b75-4623-a7ef-3ae90e376158" containerID="be2aeded559d301bd9d43a9e42ff29f3ea88c2f167b97f009b370771c8ae4c7a" exitCode=0
Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.627653 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7xmh" event={"ID":"b2316420-3b75-4623-a7ef-3ae90e376158","Type":"ContainerDied","Data":"be2aeded559d301bd9d43a9e42ff29f3ea88c2f167b97f009b370771c8ae4c7a"}
Jan 30 16:02:44 crc
kubenswrapper[4740]: I0130 16:02:44.651358 4740 generic.go:334] "Generic (PLEG): container finished" podID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerID="943d7b9001e349eb51829d30dc2d34e6b6457a90e92d05f1ab4a2554cf46738b" exitCode=0 Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.651668 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whb2w" event={"ID":"06656d2a-d0cf-48c5-b4f3-7780519a8bc2","Type":"ContainerDied","Data":"943d7b9001e349eb51829d30dc2d34e6b6457a90e92d05f1ab4a2554cf46738b"} Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.713545 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gqx29" Jan 30 16:02:44 crc kubenswrapper[4740]: I0130 16:02:44.890887 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vsqch" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.000552 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xltj\" (UniqueName: \"kubernetes.io/projected/d3a1319e-f522-47f8-91ad-71235f9e9f45-kube-api-access-5xltj\") pod \"d3a1319e-f522-47f8-91ad-71235f9e9f45\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.000618 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-utilities\") pod \"d3a1319e-f522-47f8-91ad-71235f9e9f45\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.000658 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-catalog-content\") pod \"d3a1319e-f522-47f8-91ad-71235f9e9f45\" (UID: \"d3a1319e-f522-47f8-91ad-71235f9e9f45\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.001861 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-utilities" (OuterVolumeSpecName: "utilities") pod "d3a1319e-f522-47f8-91ad-71235f9e9f45" (UID: "d3a1319e-f522-47f8-91ad-71235f9e9f45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.037377 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a1319e-f522-47f8-91ad-71235f9e9f45-kube-api-access-5xltj" (OuterVolumeSpecName: "kube-api-access-5xltj") pod "d3a1319e-f522-47f8-91ad-71235f9e9f45" (UID: "d3a1319e-f522-47f8-91ad-71235f9e9f45"). InnerVolumeSpecName "kube-api-access-5xltj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.049660 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3a1319e-f522-47f8-91ad-71235f9e9f45" (UID: "d3a1319e-f522-47f8-91ad-71235f9e9f45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.090525 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.102391 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xltj\" (UniqueName: \"kubernetes.io/projected/d3a1319e-f522-47f8-91ad-71235f9e9f45-kube-api-access-5xltj\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.102430 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.102443 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3a1319e-f522-47f8-91ad-71235f9e9f45-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.145841 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whb2w" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.175176 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7xmh" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.190270 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kws9w" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.203188 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-trusted-ca\") pod \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.203301 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-operator-metrics\") pod \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.203434 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2nd2\" (UniqueName: \"kubernetes.io/projected/75b41ccc-dc45-4c27-8b9e-99cdddb63824-kube-api-access-g2nd2\") pod \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\" (UID: \"75b41ccc-dc45-4c27-8b9e-99cdddb63824\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.204600 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "75b41ccc-dc45-4c27-8b9e-99cdddb63824" (UID: "75b41ccc-dc45-4c27-8b9e-99cdddb63824"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.208625 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b41ccc-dc45-4c27-8b9e-99cdddb63824-kube-api-access-g2nd2" (OuterVolumeSpecName: "kube-api-access-g2nd2") pod "75b41ccc-dc45-4c27-8b9e-99cdddb63824" (UID: "75b41ccc-dc45-4c27-8b9e-99cdddb63824"). InnerVolumeSpecName "kube-api-access-g2nd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.209431 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "75b41ccc-dc45-4c27-8b9e-99cdddb63824" (UID: "75b41ccc-dc45-4c27-8b9e-99cdddb63824"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.246164 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qlgnh" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.303882 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qkld"] Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.304571 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-utilities\") pod \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.304667 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-catalog-content\") pod \"880ca711-7365-46af-b0dc-c0500d79f658\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.304734 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-catalog-content\") pod \"b2316420-3b75-4623-a7ef-3ae90e376158\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.304797 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-catalog-content\") pod \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.304861 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbhmd\" (UniqueName: \"kubernetes.io/projected/b2316420-3b75-4623-a7ef-3ae90e376158-kube-api-access-kbhmd\") pod \"b2316420-3b75-4623-a7ef-3ae90e376158\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.304894 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-utilities\") pod \"b2316420-3b75-4623-a7ef-3ae90e376158\" (UID: \"b2316420-3b75-4623-a7ef-3ae90e376158\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.304938 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbs6v\" (UniqueName: \"kubernetes.io/projected/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-kube-api-access-cbs6v\") pod \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\" (UID: \"06656d2a-d0cf-48c5-b4f3-7780519a8bc2\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.304967 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-utilities\") pod \"880ca711-7365-46af-b0dc-c0500d79f658\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.304995 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2wbr\" (UniqueName: \"kubernetes.io/projected/880ca711-7365-46af-b0dc-c0500d79f658-kube-api-access-x2wbr\") pod \"880ca711-7365-46af-b0dc-c0500d79f658\" (UID: \"880ca711-7365-46af-b0dc-c0500d79f658\") " Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.305304 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2nd2\" (UniqueName: \"kubernetes.io/projected/75b41ccc-dc45-4c27-8b9e-99cdddb63824-kube-api-access-g2nd2\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.305327 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.305342 4740 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/75b41ccc-dc45-4c27-8b9e-99cdddb63824-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.309096 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-utilities" (OuterVolumeSpecName: "utilities") pod "b2316420-3b75-4623-a7ef-3ae90e376158" (UID: "b2316420-3b75-4623-a7ef-3ae90e376158"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.312042 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-utilities" (OuterVolumeSpecName: "utilities") pod "880ca711-7365-46af-b0dc-c0500d79f658" (UID: "880ca711-7365-46af-b0dc-c0500d79f658"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.312996 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-utilities" (OuterVolumeSpecName: "utilities") pod "06656d2a-d0cf-48c5-b4f3-7780519a8bc2" (UID: "06656d2a-d0cf-48c5-b4f3-7780519a8bc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.316165 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2316420-3b75-4623-a7ef-3ae90e376158-kube-api-access-kbhmd" (OuterVolumeSpecName: "kube-api-access-kbhmd") pod "b2316420-3b75-4623-a7ef-3ae90e376158" (UID: "b2316420-3b75-4623-a7ef-3ae90e376158"). InnerVolumeSpecName "kube-api-access-kbhmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.316276 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/880ca711-7365-46af-b0dc-c0500d79f658-kube-api-access-x2wbr" (OuterVolumeSpecName: "kube-api-access-x2wbr") pod "880ca711-7365-46af-b0dc-c0500d79f658" (UID: "880ca711-7365-46af-b0dc-c0500d79f658"). 
InnerVolumeSpecName "kube-api-access-x2wbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.316426 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-kube-api-access-cbs6v" (OuterVolumeSpecName: "kube-api-access-cbs6v") pod "06656d2a-d0cf-48c5-b4f3-7780519a8bc2" (UID: "06656d2a-d0cf-48c5-b4f3-7780519a8bc2"). InnerVolumeSpecName "kube-api-access-cbs6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.370564 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06656d2a-d0cf-48c5-b4f3-7780519a8bc2" (UID: "06656d2a-d0cf-48c5-b4f3-7780519a8bc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.379212 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqx29"] Jan 30 16:02:45 crc kubenswrapper[4740]: W0130 16:02:45.379423 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74ccccc6_dc66_4346_8d92_b38103ce5d69.slice/crio-d054ceab492b45a2332861cb4e6f394a042b4c08d6a865bd4f9d754d3592febd WatchSource:0}: Error finding container d054ceab492b45a2332861cb4e6f394a042b4c08d6a865bd4f9d754d3592febd: Status 404 returned error can't find the container with id d054ceab492b45a2332861cb4e6f394a042b4c08d6a865bd4f9d754d3592febd Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.404700 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2316420-3b75-4623-a7ef-3ae90e376158" (UID: "b2316420-3b75-4623-a7ef-3ae90e376158"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.407065 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.407101 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.407115 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbhmd\" (UniqueName: \"kubernetes.io/projected/b2316420-3b75-4623-a7ef-3ae90e376158-kube-api-access-kbhmd\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.407129 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2316420-3b75-4623-a7ef-3ae90e376158-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.407142 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbs6v\" (UniqueName: \"kubernetes.io/projected/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-kube-api-access-cbs6v\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.407152 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.407162 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2wbr\" (UniqueName: \"kubernetes.io/projected/880ca711-7365-46af-b0dc-c0500d79f658-kube-api-access-x2wbr\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.407171 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06656d2a-d0cf-48c5-b4f3-7780519a8bc2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.486922 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "880ca711-7365-46af-b0dc-c0500d79f658" (UID: "880ca711-7365-46af-b0dc-c0500d79f658"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.508391 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/880ca711-7365-46af-b0dc-c0500d79f658-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.662787 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whb2w"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.662792 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whb2w" event={"ID":"06656d2a-d0cf-48c5-b4f3-7780519a8bc2","Type":"ContainerDied","Data":"a1f503029cb751324fa640a8abffceccdb8a0773b7a30730003155d010376b02"}
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.662891 4740 scope.go:117] "RemoveContainer" containerID="943d7b9001e349eb51829d30dc2d34e6b6457a90e92d05f1ab4a2554cf46738b"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.666824 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsqch" event={"ID":"d3a1319e-f522-47f8-91ad-71235f9e9f45","Type":"ContainerDied","Data":"e816f2b39984414d086b3007776d632e6a16bc24d0bc46f3bc1508f9514251d0"}
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.666943 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vsqch"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.671821 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j" event={"ID":"75b41ccc-dc45-4c27-8b9e-99cdddb63824","Type":"ContainerDied","Data":"23298484ce9def879ae3707917abcde98e1afd6cc9e1738f49fa1616954f4d10"}
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.672282 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjf5j"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.674433 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gqx29" event={"ID":"74ccccc6-dc66-4346-8d92-b38103ce5d69","Type":"ContainerStarted","Data":"8e3de8fa57ed8a0d558eb9dfe7d533b40d140e070cfd71b3cffeb6215b1e7c62"}
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.674480 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gqx29" event={"ID":"74ccccc6-dc66-4346-8d92-b38103ce5d69","Type":"ContainerStarted","Data":"d054ceab492b45a2332861cb4e6f394a042b4c08d6a865bd4f9d754d3592febd"}
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.674919 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.676145 4740 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gqx29 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" start-of-body=
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.676190 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gqx29" podUID="74ccccc6-dc66-4346-8d92-b38103ce5d69" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused"
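
The failed readiness probe right after ContainerStarted is the usual startup race, not a fault: the first HTTP probe fires before the new marketplace-operator binds :8080, so the TCP connect is refused; the pod reports "ready" about a second later (16:02:46.705731 below). A sketch of an equivalent HTTP readiness check in Go; the endpoint is taken from the log, but the one-second retry period is an assumption, since the pod's probe spec is not in this log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	for {
		resp, err := client.Get("http://10.217.0.70:8080/healthz")
		if err != nil {
			// e.g. "connect: connection refused" while the server is still starting
			fmt.Println("probe failed:", err)
			time.Sleep(time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("ready")
			return
		}
		time.Sleep(time.Second)
	}
}
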
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.677864 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kws9w"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.677858 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kws9w" event={"ID":"880ca711-7365-46af-b0dc-c0500d79f658","Type":"ContainerDied","Data":"6d136224d7c6d7b967088763ca4b59e6040a803858fcc27f1342d0906c9e3a20"}
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.680051 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7xmh" event={"ID":"b2316420-3b75-4623-a7ef-3ae90e376158","Type":"ContainerDied","Data":"b15750428119cf412a8611851b74116c0e2acb0a08c0b8222cb4549feb73fd7a"}
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.680197 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7xmh"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.696856 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vsqch"]
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.699750 4740 scope.go:117] "RemoveContainer" containerID="b2daf4354a19a9c27739e455c41d0e2d3af9176cadc357027d03bee9818ac465"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.702738 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vsqch"]
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.727958 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gqx29" podStartSLOduration=1.7279316759999999 podStartE2EDuration="1.727931676s" podCreationTimestamp="2026-01-30 16:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:02:45.726864388 +0000 UTC m=+414.363926997" watchObservedRunningTime="2026-01-30 16:02:45.727931676 +0000 UTC m=+414.364994275"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.748445 4740 scope.go:117] "RemoveContainer" containerID="417bb3f0791a11888623f76af5ca39ca4f4609234a5c31e18f23c889410b5ec6"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.758095 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjf5j"]
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.764802 4740 scope.go:117] "RemoveContainer" containerID="0bab91095e39f89ad615d56fa28b0b4638904622fc95c92d71b4c33da580a93f"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.772026 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjf5j"]
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.788810 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whb2w"]
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.793857 4740 scope.go:117] "RemoveContainer" containerID="046926360fae9327bb15aee50e34c0897098f225a3a2c4d68a115ea244d92835"
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.798615 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-whb2w"]
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.810529 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7xmh"]
Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.815182 4740 scope.go:117]
"RemoveContainer" containerID="0d9801c843d4a646a87298a154435b3bf737586ba54afc218b6c271b1e9fdc6a" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.821941 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q7xmh"] Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.826318 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kws9w"] Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.830066 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kws9w"] Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.835988 4740 scope.go:117] "RemoveContainer" containerID="b94d1163766e043bebcc1491bde2135778585825bf8ee9a417d08d9e77b93d94" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.860930 4740 scope.go:117] "RemoveContainer" containerID="033cce39bc84e5606258e3c1e7bf92cb937ad8874ba44f615ad7454b218ef7d4" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.893785 4740 scope.go:117] "RemoveContainer" containerID="2bfdb7198a77d525724530864177d82fb37549eee99200a6bd439a266af8dbf1" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.917643 4740 scope.go:117] "RemoveContainer" containerID="36472cd3c126c9be87d73da22690e783b1fa5ffe8e930e8008294bd63b447be2" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.935586 4740 scope.go:117] "RemoveContainer" containerID="be2aeded559d301bd9d43a9e42ff29f3ea88c2f167b97f009b370771c8ae4c7a" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.957092 4740 scope.go:117] "RemoveContainer" containerID="d9e2ebd7cbcb46f06ccc2b23d41a9ed5c14b6501b6283e8ad4e0ddd37afc58b8" Jan 30 16:02:45 crc kubenswrapper[4740]: I0130 16:02:45.978107 4740 scope.go:117] "RemoveContainer" containerID="ee4facab844986ba99240bc020def4dbe6be0b3fcd9c7e6fbffb3f55746e7481" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.566312 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z5jvn"] Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.566698 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2316420-3b75-4623-a7ef-3ae90e376158" containerName="extract-content" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.566721 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2316420-3b75-4623-a7ef-3ae90e376158" containerName="extract-content" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.566749 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="880ca711-7365-46af-b0dc-c0500d79f658" containerName="registry-server" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.566763 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="880ca711-7365-46af-b0dc-c0500d79f658" containerName="registry-server" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.566784 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2316420-3b75-4623-a7ef-3ae90e376158" containerName="registry-server" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.566799 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2316420-3b75-4623-a7ef-3ae90e376158" containerName="registry-server" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.566820 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="880ca711-7365-46af-b0dc-c0500d79f658" containerName="extract-utilities" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.566834 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="880ca711-7365-46af-b0dc-c0500d79f658" containerName="extract-utilities" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.566851 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerName="extract-content" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.566866 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerName="extract-content" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.566886 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerName="registry-server" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.566900 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerName="registry-server" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.566918 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a1319e-f522-47f8-91ad-71235f9e9f45" containerName="extract-content" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.566931 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a1319e-f522-47f8-91ad-71235f9e9f45" containerName="extract-content" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.566948 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a1319e-f522-47f8-91ad-71235f9e9f45" containerName="registry-server" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.566961 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a1319e-f522-47f8-91ad-71235f9e9f45" containerName="registry-server" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.567021 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2316420-3b75-4623-a7ef-3ae90e376158" containerName="extract-utilities" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.567036 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2316420-3b75-4623-a7ef-3ae90e376158" containerName="extract-utilities" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.567057 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="880ca711-7365-46af-b0dc-c0500d79f658" containerName="extract-content" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.567070 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="880ca711-7365-46af-b0dc-c0500d79f658" containerName="extract-content" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.567094 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b41ccc-dc45-4c27-8b9e-99cdddb63824" containerName="marketplace-operator" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.567108 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b41ccc-dc45-4c27-8b9e-99cdddb63824" containerName="marketplace-operator" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.567127 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a1319e-f522-47f8-91ad-71235f9e9f45" containerName="extract-utilities" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.567140 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a1319e-f522-47f8-91ad-71235f9e9f45" containerName="extract-utilities" Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.567159 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerName="extract-utilities" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.567173 4740 
Jan 30 16:02:46 crc kubenswrapper[4740]: E0130 16:02:46.567159 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerName="extract-utilities"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.567173 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerName="extract-utilities"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.567388 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b41ccc-dc45-4c27-8b9e-99cdddb63824" containerName="marketplace-operator"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.567412 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="880ca711-7365-46af-b0dc-c0500d79f658" containerName="registry-server"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.567429 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2316420-3b75-4623-a7ef-3ae90e376158" containerName="registry-server"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.567449 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" containerName="registry-server"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.567473 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a1319e-f522-47f8-91ad-71235f9e9f45" containerName="registry-server"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.569151 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5jvn"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.575049 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.587436 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5jvn"]
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.623228 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91bd209c-56bd-4aa3-b454-c05ef1b75167-utilities\") pod \"community-operators-z5jvn\" (UID: \"91bd209c-56bd-4aa3-b454-c05ef1b75167\") " pod="openshift-marketplace/community-operators-z5jvn"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.623299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91bd209c-56bd-4aa3-b454-c05ef1b75167-catalog-content\") pod \"community-operators-z5jvn\" (UID: \"91bd209c-56bd-4aa3-b454-c05ef1b75167\") " pod="openshift-marketplace/community-operators-z5jvn"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.623775 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpm67\" (UniqueName: \"kubernetes.io/projected/91bd209c-56bd-4aa3-b454-c05ef1b75167-kube-api-access-gpm67\") pod \"community-operators-z5jvn\" (UID: \"91bd209c-56bd-4aa3-b454-c05ef1b75167\") " pod="openshift-marketplace/community-operators-z5jvn"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.705731 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gqx29"
Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.725999 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpm67\" (UniqueName: \"kubernetes.io/projected/91bd209c-56bd-4aa3-b454-c05ef1b75167-kube-api-access-gpm67\") pod \"community-operators-z5jvn\" (UID: \"91bd209c-56bd-4aa3-b454-c05ef1b75167\") "
pod="openshift-marketplace/community-operators-z5jvn" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.726400 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91bd209c-56bd-4aa3-b454-c05ef1b75167-utilities\") pod \"community-operators-z5jvn\" (UID: \"91bd209c-56bd-4aa3-b454-c05ef1b75167\") " pod="openshift-marketplace/community-operators-z5jvn" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.726505 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91bd209c-56bd-4aa3-b454-c05ef1b75167-catalog-content\") pod \"community-operators-z5jvn\" (UID: \"91bd209c-56bd-4aa3-b454-c05ef1b75167\") " pod="openshift-marketplace/community-operators-z5jvn" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.727674 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91bd209c-56bd-4aa3-b454-c05ef1b75167-catalog-content\") pod \"community-operators-z5jvn\" (UID: \"91bd209c-56bd-4aa3-b454-c05ef1b75167\") " pod="openshift-marketplace/community-operators-z5jvn" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.729478 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91bd209c-56bd-4aa3-b454-c05ef1b75167-utilities\") pod \"community-operators-z5jvn\" (UID: \"91bd209c-56bd-4aa3-b454-c05ef1b75167\") " pod="openshift-marketplace/community-operators-z5jvn" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.750249 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpm67\" (UniqueName: \"kubernetes.io/projected/91bd209c-56bd-4aa3-b454-c05ef1b75167-kube-api-access-gpm67\") pod \"community-operators-z5jvn\" (UID: \"91bd209c-56bd-4aa3-b454-c05ef1b75167\") " pod="openshift-marketplace/community-operators-z5jvn" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.772844 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ccqbd"] Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.774312 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ccqbd" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.776495 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.784338 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ccqbd"] Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.828257 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2mp2\" (UniqueName: \"kubernetes.io/projected/16cbd370-42ef-4109-8c03-15de9af9df16-kube-api-access-m2mp2\") pod \"certified-operators-ccqbd\" (UID: \"16cbd370-42ef-4109-8c03-15de9af9df16\") " pod="openshift-marketplace/certified-operators-ccqbd" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.828566 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16cbd370-42ef-4109-8c03-15de9af9df16-utilities\") pod \"certified-operators-ccqbd\" (UID: \"16cbd370-42ef-4109-8c03-15de9af9df16\") " pod="openshift-marketplace/certified-operators-ccqbd" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.828940 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16cbd370-42ef-4109-8c03-15de9af9df16-catalog-content\") pod \"certified-operators-ccqbd\" (UID: \"16cbd370-42ef-4109-8c03-15de9af9df16\") " pod="openshift-marketplace/certified-operators-ccqbd" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.898781 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z5jvn" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.932487 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16cbd370-42ef-4109-8c03-15de9af9df16-utilities\") pod \"certified-operators-ccqbd\" (UID: \"16cbd370-42ef-4109-8c03-15de9af9df16\") " pod="openshift-marketplace/certified-operators-ccqbd" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.932574 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16cbd370-42ef-4109-8c03-15de9af9df16-catalog-content\") pod \"certified-operators-ccqbd\" (UID: \"16cbd370-42ef-4109-8c03-15de9af9df16\") " pod="openshift-marketplace/certified-operators-ccqbd" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.932705 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2mp2\" (UniqueName: \"kubernetes.io/projected/16cbd370-42ef-4109-8c03-15de9af9df16-kube-api-access-m2mp2\") pod \"certified-operators-ccqbd\" (UID: \"16cbd370-42ef-4109-8c03-15de9af9df16\") " pod="openshift-marketplace/certified-operators-ccqbd" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.933127 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16cbd370-42ef-4109-8c03-15de9af9df16-utilities\") pod \"certified-operators-ccqbd\" (UID: \"16cbd370-42ef-4109-8c03-15de9af9df16\") " pod="openshift-marketplace/certified-operators-ccqbd" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.933314 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16cbd370-42ef-4109-8c03-15de9af9df16-catalog-content\") pod \"certified-operators-ccqbd\" (UID: \"16cbd370-42ef-4109-8c03-15de9af9df16\") " pod="openshift-marketplace/certified-operators-ccqbd" Jan 30 16:02:46 crc kubenswrapper[4740]: I0130 16:02:46.955268 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2mp2\" (UniqueName: \"kubernetes.io/projected/16cbd370-42ef-4109-8c03-15de9af9df16-kube-api-access-m2mp2\") pod \"certified-operators-ccqbd\" (UID: \"16cbd370-42ef-4109-8c03-15de9af9df16\") " pod="openshift-marketplace/certified-operators-ccqbd" Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.095640 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ccqbd" Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.348571 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06656d2a-d0cf-48c5-b4f3-7780519a8bc2" path="/var/lib/kubelet/pods/06656d2a-d0cf-48c5-b4f3-7780519a8bc2/volumes" Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.349339 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b41ccc-dc45-4c27-8b9e-99cdddb63824" path="/var/lib/kubelet/pods/75b41ccc-dc45-4c27-8b9e-99cdddb63824/volumes" Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.350165 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="880ca711-7365-46af-b0dc-c0500d79f658" path="/var/lib/kubelet/pods/880ca711-7365-46af-b0dc-c0500d79f658/volumes" Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.351123 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2316420-3b75-4623-a7ef-3ae90e376158" path="/var/lib/kubelet/pods/b2316420-3b75-4623-a7ef-3ae90e376158/volumes" Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.351840 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a1319e-f522-47f8-91ad-71235f9e9f45" path="/var/lib/kubelet/pods/d3a1319e-f522-47f8-91ad-71235f9e9f45/volumes" Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.352492 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5jvn"] Jan 30 16:02:47 crc kubenswrapper[4740]: W0130 16:02:47.353679 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91bd209c_56bd_4aa3_b454_c05ef1b75167.slice/crio-dac7c868239f0347bd7d5f05c4a4aa5771badc16144487bf2dcea36ef5275da4 WatchSource:0}: Error finding container dac7c868239f0347bd7d5f05c4a4aa5771badc16144487bf2dcea36ef5275da4: Status 404 returned error can't find the container with id dac7c868239f0347bd7d5f05c4a4aa5771badc16144487bf2dcea36ef5275da4 Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.518829 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ccqbd"] Jan 30 16:02:47 crc kubenswrapper[4740]: W0130 16:02:47.521649 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16cbd370_42ef_4109_8c03_15de9af9df16.slice/crio-e394efee200a97a7d887817649c8073fb1da52fba4ad09640e3b590d916435d2 WatchSource:0}: Error finding container e394efee200a97a7d887817649c8073fb1da52fba4ad09640e3b590d916435d2: Status 404 returned error can't find the container with id e394efee200a97a7d887817649c8073fb1da52fba4ad09640e3b590d916435d2 Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.709622 4740 generic.go:334] "Generic (PLEG): container finished" podID="91bd209c-56bd-4aa3-b454-c05ef1b75167" containerID="64e251450bb1bf947c43c62ac853f6b5f6ef3bfb5c915e1ea6906851f3b799bf" exitCode=0 Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.709675 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jvn" event={"ID":"91bd209c-56bd-4aa3-b454-c05ef1b75167","Type":"ContainerDied","Data":"64e251450bb1bf947c43c62ac853f6b5f6ef3bfb5c915e1ea6906851f3b799bf"} Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.709734 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jvn" 
event={"ID":"91bd209c-56bd-4aa3-b454-c05ef1b75167","Type":"ContainerStarted","Data":"dac7c868239f0347bd7d5f05c4a4aa5771badc16144487bf2dcea36ef5275da4"} Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.714833 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccqbd" event={"ID":"16cbd370-42ef-4109-8c03-15de9af9df16","Type":"ContainerStarted","Data":"abbffee3fb9a786cad672d3252c0238e0f9615dc9ea38ec67cd91340a568aa92"} Jan 30 16:02:47 crc kubenswrapper[4740]: I0130 16:02:47.714886 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccqbd" event={"ID":"16cbd370-42ef-4109-8c03-15de9af9df16","Type":"ContainerStarted","Data":"e394efee200a97a7d887817649c8073fb1da52fba4ad09640e3b590d916435d2"} Jan 30 16:02:48 crc kubenswrapper[4740]: I0130 16:02:48.725252 4740 generic.go:334] "Generic (PLEG): container finished" podID="16cbd370-42ef-4109-8c03-15de9af9df16" containerID="abbffee3fb9a786cad672d3252c0238e0f9615dc9ea38ec67cd91340a568aa92" exitCode=0 Jan 30 16:02:48 crc kubenswrapper[4740]: I0130 16:02:48.725429 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccqbd" event={"ID":"16cbd370-42ef-4109-8c03-15de9af9df16","Type":"ContainerDied","Data":"abbffee3fb9a786cad672d3252c0238e0f9615dc9ea38ec67cd91340a568aa92"} Jan 30 16:02:48 crc kubenswrapper[4740]: I0130 16:02:48.960560 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wbq99"] Jan 30 16:02:48 crc kubenswrapper[4740]: I0130 16:02:48.962008 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbq99" Jan 30 16:02:48 crc kubenswrapper[4740]: I0130 16:02:48.964557 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 16:02:48 crc kubenswrapper[4740]: I0130 16:02:48.971078 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbq99"] Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.064996 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj59v\" (UniqueName: \"kubernetes.io/projected/6ca42d55-0b57-49b3-a072-a4b2a91333e1-kube-api-access-nj59v\") pod \"redhat-marketplace-wbq99\" (UID: \"6ca42d55-0b57-49b3-a072-a4b2a91333e1\") " pod="openshift-marketplace/redhat-marketplace-wbq99" Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.065103 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca42d55-0b57-49b3-a072-a4b2a91333e1-utilities\") pod \"redhat-marketplace-wbq99\" (UID: \"6ca42d55-0b57-49b3-a072-a4b2a91333e1\") " pod="openshift-marketplace/redhat-marketplace-wbq99" Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.065431 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca42d55-0b57-49b3-a072-a4b2a91333e1-catalog-content\") pod \"redhat-marketplace-wbq99\" (UID: \"6ca42d55-0b57-49b3-a072-a4b2a91333e1\") " pod="openshift-marketplace/redhat-marketplace-wbq99" Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.159057 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wj2w7"] Jan 30 16:02:49 crc kubenswrapper[4740]: 
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.164153 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.166622 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj59v\" (UniqueName: \"kubernetes.io/projected/6ca42d55-0b57-49b3-a072-a4b2a91333e1-kube-api-access-nj59v\") pod \"redhat-marketplace-wbq99\" (UID: \"6ca42d55-0b57-49b3-a072-a4b2a91333e1\") " pod="openshift-marketplace/redhat-marketplace-wbq99"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.166726 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca42d55-0b57-49b3-a072-a4b2a91333e1-utilities\") pod \"redhat-marketplace-wbq99\" (UID: \"6ca42d55-0b57-49b3-a072-a4b2a91333e1\") " pod="openshift-marketplace/redhat-marketplace-wbq99"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.166797 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca42d55-0b57-49b3-a072-a4b2a91333e1-catalog-content\") pod \"redhat-marketplace-wbq99\" (UID: \"6ca42d55-0b57-49b3-a072-a4b2a91333e1\") " pod="openshift-marketplace/redhat-marketplace-wbq99"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.167692 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca42d55-0b57-49b3-a072-a4b2a91333e1-catalog-content\") pod \"redhat-marketplace-wbq99\" (UID: \"6ca42d55-0b57-49b3-a072-a4b2a91333e1\") " pod="openshift-marketplace/redhat-marketplace-wbq99"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.167721 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca42d55-0b57-49b3-a072-a4b2a91333e1-utilities\") pod \"redhat-marketplace-wbq99\" (UID: \"6ca42d55-0b57-49b3-a072-a4b2a91333e1\") " pod="openshift-marketplace/redhat-marketplace-wbq99"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.181844 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wj2w7"]
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.203847 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj59v\" (UniqueName: \"kubernetes.io/projected/6ca42d55-0b57-49b3-a072-a4b2a91333e1-kube-api-access-nj59v\") pod \"redhat-marketplace-wbq99\" (UID: \"6ca42d55-0b57-49b3-a072-a4b2a91333e1\") " pod="openshift-marketplace/redhat-marketplace-wbq99"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.268534 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe140203-75a6-4e94-84ab-1645cc026308-utilities\") pod \"redhat-operators-wj2w7\" (UID: \"fe140203-75a6-4e94-84ab-1645cc026308\") " pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.268616 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ww2q\" (UniqueName: \"kubernetes.io/projected/fe140203-75a6-4e94-84ab-1645cc026308-kube-api-access-2ww2q\") pod \"redhat-operators-wj2w7\" (UID: \"fe140203-75a6-4e94-84ab-1645cc026308\") " pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.268958 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe140203-75a6-4e94-84ab-1645cc026308-catalog-content\") pod \"redhat-operators-wj2w7\" (UID: \"fe140203-75a6-4e94-84ab-1645cc026308\") " pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.320549 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbq99"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.372633 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe140203-75a6-4e94-84ab-1645cc026308-utilities\") pod \"redhat-operators-wj2w7\" (UID: \"fe140203-75a6-4e94-84ab-1645cc026308\") " pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.372714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ww2q\" (UniqueName: \"kubernetes.io/projected/fe140203-75a6-4e94-84ab-1645cc026308-kube-api-access-2ww2q\") pod \"redhat-operators-wj2w7\" (UID: \"fe140203-75a6-4e94-84ab-1645cc026308\") " pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.372760 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe140203-75a6-4e94-84ab-1645cc026308-catalog-content\") pod \"redhat-operators-wj2w7\" (UID: \"fe140203-75a6-4e94-84ab-1645cc026308\") " pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.373397 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe140203-75a6-4e94-84ab-1645cc026308-utilities\") pod \"redhat-operators-wj2w7\" (UID: \"fe140203-75a6-4e94-84ab-1645cc026308\") " pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.374409 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe140203-75a6-4e94-84ab-1645cc026308-catalog-content\") pod \"redhat-operators-wj2w7\" (UID: \"fe140203-75a6-4e94-84ab-1645cc026308\") " pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.403325 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ww2q\" (UniqueName: \"kubernetes.io/projected/fe140203-75a6-4e94-84ab-1645cc026308-kube-api-access-2ww2q\") pod \"redhat-operators-wj2w7\" (UID: \"fe140203-75a6-4e94-84ab-1645cc026308\") " pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.480954 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.743143 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccqbd" event={"ID":"16cbd370-42ef-4109-8c03-15de9af9df16","Type":"ContainerStarted","Data":"7fc624a899cfeb478675df600c58c84dedfbcae2a9a56eb8f43ec0847b28a93e"}
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.745312 4740 generic.go:334] "Generic (PLEG): container finished" podID="91bd209c-56bd-4aa3-b454-c05ef1b75167" containerID="d0c634412d6a8fb5f46c63ce1a0d24879d7681bc736c0f7626c72822ea77d43d" exitCode=0
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.745404 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jvn" event={"ID":"91bd209c-56bd-4aa3-b454-c05ef1b75167","Type":"ContainerDied","Data":"d0c634412d6a8fb5f46c63ce1a0d24879d7681bc736c0f7626c72822ea77d43d"}
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.818503 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbq99"]
Jan 30 16:02:49 crc kubenswrapper[4740]: I0130 16:02:49.889197 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wj2w7"]
Jan 30 16:02:50 crc kubenswrapper[4740]: I0130 16:02:50.754247 4740 generic.go:334] "Generic (PLEG): container finished" podID="16cbd370-42ef-4109-8c03-15de9af9df16" containerID="7fc624a899cfeb478675df600c58c84dedfbcae2a9a56eb8f43ec0847b28a93e" exitCode=0
Jan 30 16:02:50 crc kubenswrapper[4740]: I0130 16:02:50.754325 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccqbd" event={"ID":"16cbd370-42ef-4109-8c03-15de9af9df16","Type":"ContainerDied","Data":"7fc624a899cfeb478675df600c58c84dedfbcae2a9a56eb8f43ec0847b28a93e"}
Jan 30 16:02:50 crc kubenswrapper[4740]: I0130 16:02:50.758594 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jvn" event={"ID":"91bd209c-56bd-4aa3-b454-c05ef1b75167","Type":"ContainerStarted","Data":"56da856e7e4b06473ccd50c7adea8bd362284b60172f65f1c993e86d3b9c211b"}
Jan 30 16:02:50 crc kubenswrapper[4740]: I0130 16:02:50.767187 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ca42d55-0b57-49b3-a072-a4b2a91333e1" containerID="52028af4cc7ca1f1a9d0e9f22f063ccee6efb12c5803baeac7d154fd0759c4eb" exitCode=0
Jan 30 16:02:50 crc kubenswrapper[4740]: I0130 16:02:50.767628 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbq99" event={"ID":"6ca42d55-0b57-49b3-a072-a4b2a91333e1","Type":"ContainerDied","Data":"52028af4cc7ca1f1a9d0e9f22f063ccee6efb12c5803baeac7d154fd0759c4eb"}
Jan 30 16:02:50 crc kubenswrapper[4740]: I0130 16:02:50.767689 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbq99" event={"ID":"6ca42d55-0b57-49b3-a072-a4b2a91333e1","Type":"ContainerStarted","Data":"12df43aa6ef561bbc7bcd2dfebe89f820379db5cedb53b9ed22f9df6fbc2187c"}
Jan 30 16:02:50 crc kubenswrapper[4740]: I0130 16:02:50.769280 4740 generic.go:334] "Generic (PLEG): container finished" podID="fe140203-75a6-4e94-84ab-1645cc026308" containerID="c241e0411e6a52f65605fed920e4a2305fd0553fcdfa68dc8f11841154e97a2c" exitCode=0
Jan 30 16:02:50 crc kubenswrapper[4740]: I0130 16:02:50.769325 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj2w7" event={"ID":"fe140203-75a6-4e94-84ab-1645cc026308","Type":"ContainerDied","Data":"c241e0411e6a52f65605fed920e4a2305fd0553fcdfa68dc8f11841154e97a2c"}
Jan 30 16:02:50 crc kubenswrapper[4740]: I0130 16:02:50.769371 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj2w7" event={"ID":"fe140203-75a6-4e94-84ab-1645cc026308","Type":"ContainerStarted","Data":"b4fa2c57c9af50f39a5e46f9f8736f34f9706ab09e7e0f369a8d658c0ec6cc05"}
Jan 30 16:02:51 crc kubenswrapper[4740]: I0130 16:02:51.781993 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj2w7" event={"ID":"fe140203-75a6-4e94-84ab-1645cc026308","Type":"ContainerStarted","Data":"a66634e9b967239048e605082b00ba492524eb410ffd1c03064217858c3c0687"}
Jan 30 16:02:51 crc kubenswrapper[4740]: I0130 16:02:51.784810 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccqbd" event={"ID":"16cbd370-42ef-4109-8c03-15de9af9df16","Type":"ContainerStarted","Data":"adedf2db1801249ac11eeff0592b9420fe7ed2a4357587327657fd9e8b02466c"}
Jan 30 16:02:51 crc kubenswrapper[4740]: I0130 16:02:51.787323 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbq99" event={"ID":"6ca42d55-0b57-49b3-a072-a4b2a91333e1","Type":"ContainerDied","Data":"83c98c1ed7b5747bd289e16b9faa845688038d2d191453e1fae0f1809f0699cf"}
Jan 30 16:02:51 crc kubenswrapper[4740]: I0130 16:02:51.787215 4740 generic.go:334] "Generic (PLEG): container finished" podID="6ca42d55-0b57-49b3-a072-a4b2a91333e1" containerID="83c98c1ed7b5747bd289e16b9faa845688038d2d191453e1fae0f1809f0699cf" exitCode=0
Jan 30 16:02:51 crc kubenswrapper[4740]: I0130 16:02:51.819444 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z5jvn" podStartSLOduration=3.099726447 podStartE2EDuration="5.819419417s" podCreationTimestamp="2026-01-30 16:02:46 +0000 UTC" firstStartedPulling="2026-01-30 16:02:47.712738013 +0000 UTC m=+416.349800612" lastFinishedPulling="2026-01-30 16:02:50.432430973 +0000 UTC m=+419.069493582" observedRunningTime="2026-01-30 16:02:50.844866688 +0000 UTC m=+419.481929287" watchObservedRunningTime="2026-01-30 16:02:51.819419417 +0000 UTC m=+420.456482026"
Jan 30 16:02:51 crc kubenswrapper[4740]: I0130 16:02:51.860738 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ccqbd" podStartSLOduration=3.368515118 podStartE2EDuration="5.860716828s" podCreationTimestamp="2026-01-30 16:02:46 +0000 UTC" firstStartedPulling="2026-01-30 16:02:48.729547958 +0000 UTC m=+417.366610577" lastFinishedPulling="2026-01-30 16:02:51.221749688 +0000 UTC m=+419.858812287" observedRunningTime="2026-01-30 16:02:51.859858186 +0000 UTC m=+420.496920785" watchObservedRunningTime="2026-01-30 16:02:51.860716828 +0000 UTC m=+420.497779427"
Jan 30 16:02:52 crc kubenswrapper[4740]: I0130 16:02:52.799455 4740 generic.go:334] "Generic (PLEG): container finished" podID="fe140203-75a6-4e94-84ab-1645cc026308" containerID="a66634e9b967239048e605082b00ba492524eb410ffd1c03064217858c3c0687" exitCode=0
Jan 30 16:02:52 crc kubenswrapper[4740]: I0130 16:02:52.799546 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj2w7" event={"ID":"fe140203-75a6-4e94-84ab-1645cc026308","Type":"ContainerDied","Data":"a66634e9b967239048e605082b00ba492524eb410ffd1c03064217858c3c0687"}
Jan 30 16:02:52 crc kubenswrapper[4740]: I0130 16:02:52.805974 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbq99" event={"ID":"6ca42d55-0b57-49b3-a072-a4b2a91333e1","Type":"ContainerStarted","Data":"7368b4b63e63c0ec7ea6f960d61fccb96faca9024164437996a85167564f934d"}
Jan 30 16:02:52 crc kubenswrapper[4740]: I0130 16:02:52.839882 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wbq99" podStartSLOduration=3.378752955 podStartE2EDuration="4.839857365s" podCreationTimestamp="2026-01-30 16:02:48 +0000 UTC" firstStartedPulling="2026-01-30 16:02:50.76908471 +0000 UTC m=+419.406147319" lastFinishedPulling="2026-01-30 16:02:52.23018913 +0000 UTC m=+420.867251729" observedRunningTime="2026-01-30 16:02:52.830728163 +0000 UTC m=+421.467790762" watchObservedRunningTime="2026-01-30 16:02:52.839857365 +0000 UTC m=+421.476919964"
Jan 30 16:02:53 crc kubenswrapper[4740]: I0130 16:02:53.815549 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wj2w7" event={"ID":"fe140203-75a6-4e94-84ab-1645cc026308","Type":"ContainerStarted","Data":"80ceec4729561a372d66967e6240150ba3229a13c45daa9c6a2b14014b69ea43"}
Jan 30 16:02:54 crc kubenswrapper[4740]: I0130 16:02:54.455274 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:02:54 crc kubenswrapper[4740]: I0130 16:02:54.455410 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:02:56 crc kubenswrapper[4740]: I0130 16:02:56.899392 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z5jvn"
Jan 30 16:02:56 crc kubenswrapper[4740]: I0130 16:02:56.900226 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z5jvn"
Jan 30 16:02:56 crc kubenswrapper[4740]: I0130 16:02:56.950923 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z5jvn"
Jan 30 16:02:56 crc kubenswrapper[4740]: I0130 16:02:56.974556 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wj2w7" podStartSLOduration=5.478828484 podStartE2EDuration="7.974526442s" podCreationTimestamp="2026-01-30 16:02:49 +0000 UTC" firstStartedPulling="2026-01-30 16:02:50.7710379 +0000 UTC m=+419.408100509" lastFinishedPulling="2026-01-30 16:02:53.266735868 +0000 UTC m=+421.903798467" observedRunningTime="2026-01-30 16:02:53.834078276 +0000 UTC m=+422.471140875" watchObservedRunningTime="2026-01-30 16:02:56.974526442 +0000 UTC m=+425.611589041"
Jan 30 16:02:57 crc kubenswrapper[4740]: I0130 16:02:57.097803 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ccqbd"
Jan 30 16:02:57 crc kubenswrapper[4740]: I0130 16:02:57.100503 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ccqbd"
Jan 30 16:02:57 crc kubenswrapper[4740]: I0130 16:02:57.142977 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ccqbd"
Jan 30 16:02:57 crc kubenswrapper[4740]: I0130 16:02:57.889729 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z5jvn"
Jan 30 16:02:57 crc kubenswrapper[4740]: I0130 16:02:57.899779 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ccqbd"
Jan 30 16:02:59 crc kubenswrapper[4740]: I0130 16:02:59.321654 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wbq99"
Jan 30 16:02:59 crc kubenswrapper[4740]: I0130 16:02:59.321729 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wbq99"
Jan 30 16:02:59 crc kubenswrapper[4740]: I0130 16:02:59.371258 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wbq99"
Jan 30 16:02:59 crc kubenswrapper[4740]: I0130 16:02:59.482563 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:59 crc kubenswrapper[4740]: I0130 16:02:59.482633 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:02:59 crc kubenswrapper[4740]: I0130 16:02:59.920570 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wbq99"
Jan 30 16:03:00 crc kubenswrapper[4740]: I0130 16:03:00.519642 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wj2w7" podUID="fe140203-75a6-4e94-84ab-1645cc026308" containerName="registry-server" probeResult="failure" output=<
Jan 30 16:03:00 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s
Jan 30 16:03:00 crc kubenswrapper[4740]: >
Jan 30 16:03:09 crc kubenswrapper[4740]: I0130 16:03:09.531072 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:03:09 crc kubenswrapper[4740]: I0130 16:03:09.577206 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wj2w7"
Jan 30 16:03:10 crc kubenswrapper[4740]: I0130 16:03:10.351482 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" podUID="a3bbedcf-9070-4b3b-a515-bfac82c6c83f" containerName="registry" containerID="cri-o://d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860" gracePeriod=30
Jan 30 16:03:10 crc kubenswrapper[4740]: I0130 16:03:10.925299 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 16:03:10 crc kubenswrapper[4740]: I0130 16:03:10.944383 4740 generic.go:334] "Generic (PLEG): container finished" podID="a3bbedcf-9070-4b3b-a515-bfac82c6c83f" containerID="d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860" exitCode=0
Jan 30 16:03:10 crc kubenswrapper[4740]: I0130 16:03:10.944443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" event={"ID":"a3bbedcf-9070-4b3b-a515-bfac82c6c83f","Type":"ContainerDied","Data":"d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860"}
Jan 30 16:03:10 crc kubenswrapper[4740]: I0130 16:03:10.944497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld" event={"ID":"a3bbedcf-9070-4b3b-a515-bfac82c6c83f","Type":"ContainerDied","Data":"6df14f7d2800454fff58c5a180c5d90eeed4d748fe2173ffcfc7f47950a996a8"}
Jan 30 16:03:10 crc kubenswrapper[4740]: I0130 16:03:10.944515 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7qkld"
Jan 30 16:03:10 crc kubenswrapper[4740]: I0130 16:03:10.944531 4740 scope.go:117] "RemoveContainer" containerID="d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860"
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.000076 4740 scope.go:117] "RemoveContainer" containerID="d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860"
Jan 30 16:03:11 crc kubenswrapper[4740]: E0130 16:03:11.000593 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860\": container with ID starting with d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860 not found: ID does not exist" containerID="d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860"
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.000638 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860"} err="failed to get container status \"d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860\": rpc error: code = NotFound desc = could not find container \"d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860\": container with ID starting with d0ddc11a74de1faa696ccb15c6fa7af38c3d2731552427075ef144620d62d860 not found: ID does not exist"
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.070429 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-trusted-ca\") pod \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") "
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.070551 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-certificates\") pod \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") "
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.070827 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") "
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.070899 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-tls\") pod \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") "
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.071791 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a3bbedcf-9070-4b3b-a515-bfac82c6c83f" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.072070 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a3bbedcf-9070-4b3b-a515-bfac82c6c83f" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.072199 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-ca-trust-extracted\") pod \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") "
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.072251 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8t6x\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-kube-api-access-d8t6x\") pod \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") "
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.072298 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-bound-sa-token\") pod \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") "
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.072351 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-installation-pull-secrets\") pod \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\" (UID: \"a3bbedcf-9070-4b3b-a515-bfac82c6c83f\") "
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.072779 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.072799 4740 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.079536 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a3bbedcf-9070-4b3b-a515-bfac82c6c83f" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.084129 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-kube-api-access-d8t6x" (OuterVolumeSpecName: "kube-api-access-d8t6x") pod "a3bbedcf-9070-4b3b-a515-bfac82c6c83f" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f"). InnerVolumeSpecName "kube-api-access-d8t6x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.084542 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a3bbedcf-9070-4b3b-a515-bfac82c6c83f" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.084562 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a3bbedcf-9070-4b3b-a515-bfac82c6c83f" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.085002 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a3bbedcf-9070-4b3b-a515-bfac82c6c83f" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.092695 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a3bbedcf-9070-4b3b-a515-bfac82c6c83f" (UID: "a3bbedcf-9070-4b3b-a515-bfac82c6c83f"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.174811 4740 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.174867 4740 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.174883 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8t6x\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-kube-api-access-d8t6x\") on node \"crc\" DevicePath \"\""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.174904 4740 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.174918 4740 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a3bbedcf-9070-4b3b-a515-bfac82c6c83f-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.292429 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qkld"]
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.300298 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7qkld"]
Jan 30 16:03:11 crc kubenswrapper[4740]: I0130 16:03:11.349143 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3bbedcf-9070-4b3b-a515-bfac82c6c83f" path="/var/lib/kubelet/pods/a3bbedcf-9070-4b3b-a515-bfac82c6c83f/volumes"
Jan 30 16:03:24 crc kubenswrapper[4740]: I0130 16:03:24.455186 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:03:24 crc kubenswrapper[4740]: I0130 16:03:24.456313 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:03:24 crc kubenswrapper[4740]: I0130 16:03:24.456471 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6"
Jan 30 16:03:24 crc kubenswrapper[4740]: I0130 16:03:24.458543 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"182b2049b868464ce8ca9205690ce1e2d3fed3750b415a7f5760d99b98292d66"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 16:03:24 crc kubenswrapper[4740]: I0130 16:03:24.458625 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://182b2049b868464ce8ca9205690ce1e2d3fed3750b415a7f5760d99b98292d66" gracePeriod=600
Jan 30 16:03:25 crc kubenswrapper[4740]: I0130 16:03:25.053677 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="182b2049b868464ce8ca9205690ce1e2d3fed3750b415a7f5760d99b98292d66" exitCode=0
Jan 30 16:03:25 crc kubenswrapper[4740]: I0130 16:03:25.053766 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"182b2049b868464ce8ca9205690ce1e2d3fed3750b415a7f5760d99b98292d66"}
Jan 30 16:03:25 crc kubenswrapper[4740]: I0130 16:03:25.054124 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"7e1f561b758eb69b53697bf5389d7cefba5c1bd9781fba3c05bad5a8566f9531"}
Jan 30 16:03:25 crc kubenswrapper[4740]: I0130 16:03:25.054158 4740 scope.go:117] "RemoveContainer" containerID="84a466c89f8d791cc7851c2f9bf92aad6c088e2815c9139125a190f28fed6f24"
Jan 30 16:05:24 crc kubenswrapper[4740]: I0130 16:05:24.455399 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:05:24 crc kubenswrapper[4740]: I0130 16:05:24.456098 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:05:54 crc kubenswrapper[4740]: I0130 16:05:54.454940 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:05:54 crc kubenswrapper[4740]: I0130 16:05:54.455957 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:06:24 crc kubenswrapper[4740]: I0130 16:06:24.455181 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:06:24 crc kubenswrapper[4740]: I0130 16:06:24.456050 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:06:24 crc kubenswrapper[4740]: I0130 16:06:24.456162 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6"
Jan 30 16:06:24 crc kubenswrapper[4740]: I0130 16:06:24.457650 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e1f561b758eb69b53697bf5389d7cefba5c1bd9781fba3c05bad5a8566f9531"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 16:06:24 crc kubenswrapper[4740]: I0130 16:06:24.457765 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://7e1f561b758eb69b53697bf5389d7cefba5c1bd9781fba3c05bad5a8566f9531" gracePeriod=600
Jan 30 16:06:25 crc kubenswrapper[4740]: I0130 16:06:25.516819 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="7e1f561b758eb69b53697bf5389d7cefba5c1bd9781fba3c05bad5a8566f9531" exitCode=0
Jan 30 16:06:25 crc kubenswrapper[4740]: I0130 16:06:25.516891 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"7e1f561b758eb69b53697bf5389d7cefba5c1bd9781fba3c05bad5a8566f9531"}
Jan 30 16:06:25 crc kubenswrapper[4740]: I0130 16:06:25.517763 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"d64453654b97af2a24f5bc387099a48fbcbd73b8814c94ebe9bbc445d2531865"}
Jan 30 16:06:25 crc kubenswrapper[4740]: I0130 16:06:25.517833 4740 scope.go:117] "RemoveContainer" containerID="182b2049b868464ce8ca9205690ce1e2d3fed3750b415a7f5760d99b98292d66"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.039830 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"]
Jan 30 16:07:48 crc kubenswrapper[4740]: E0130 16:07:48.040971 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bbedcf-9070-4b3b-a515-bfac82c6c83f" containerName="registry"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.040993 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bbedcf-9070-4b3b-a515-bfac82c6c83f" containerName="registry"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.041135 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3bbedcf-9070-4b3b-a515-bfac82c6c83f" containerName="registry"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.042213 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.045184 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.057735 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"]
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.202932 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.203026 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcgdm\" (UniqueName: \"kubernetes.io/projected/d36022b6-9743-4c56-bdcf-b10ce676d3ac-kube-api-access-zcgdm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.203149 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.304760 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.304831 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.304867 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcgdm\" (UniqueName: \"kubernetes.io/projected/d36022b6-9743-4c56-bdcf-b10ce676d3ac-kube-api-access-zcgdm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.305758 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.306022 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.325830 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcgdm\" (UniqueName: \"kubernetes.io/projected/d36022b6-9743-4c56-bdcf-b10ce676d3ac-kube-api-access-zcgdm\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"
Jan 30 16:07:48 crc kubenswrapper[4740]: I0130 16:07:48.365003 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"
Jan 30 16:07:49 crc kubenswrapper[4740]: I0130 16:07:49.207852 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv"]
Jan 30 16:07:50 crc kubenswrapper[4740]: I0130 16:07:50.129533 4740 generic.go:334] "Generic (PLEG): container finished" podID="d36022b6-9743-4c56-bdcf-b10ce676d3ac" containerID="d91c66f43c112d3a05d5e4345936459034e946aef0cd336904b08ae7ce59a2fe" exitCode=0
Jan 30 16:07:50 crc kubenswrapper[4740]: I0130 16:07:50.129647 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv" event={"ID":"d36022b6-9743-4c56-bdcf-b10ce676d3ac","Type":"ContainerDied","Data":"d91c66f43c112d3a05d5e4345936459034e946aef0cd336904b08ae7ce59a2fe"}
Jan 30 16:07:50 crc kubenswrapper[4740]: I0130 16:07:50.130034 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv" event={"ID":"d36022b6-9743-4c56-bdcf-b10ce676d3ac","Type":"ContainerStarted","Data":"8b56e7b0a2d0a6223277bb749d6cd67c0c5d71d1e9c694b27b73f47907c2fcea"}
Jan 30 16:07:50 crc kubenswrapper[4740]: I0130 16:07:50.131966 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 16:07:53 crc kubenswrapper[4740]: I0130 16:07:53.155683 4740 generic.go:334] "Generic (PLEG): container finished" podID="d36022b6-9743-4c56-bdcf-b10ce676d3ac" containerID="27f10fca57fb7e6dc23dfa655b0b0b4e89304ab2404ad238b2223defa5df841e" exitCode=0
Jan 30 16:07:53 crc kubenswrapper[4740]: I0130 16:07:53.155804 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv" event={"ID":"d36022b6-9743-4c56-bdcf-b10ce676d3ac","Type":"ContainerDied","Data":"27f10fca57fb7e6dc23dfa655b0b0b4e89304ab2404ad238b2223defa5df841e"}
Jan 30 16:07:54 crc kubenswrapper[4740]: I0130 16:07:54.166977 4740 generic.go:334] "Generic (PLEG): container finished" podID="d36022b6-9743-4c56-bdcf-b10ce676d3ac" containerID="3797862c1eb46eca5788ef8fa301dd7db7baf391952eea0c10377c0dadab7b0c" exitCode=0
podID="d36022b6-9743-4c56-bdcf-b10ce676d3ac" containerID="3797862c1eb46eca5788ef8fa301dd7db7baf391952eea0c10377c0dadab7b0c" exitCode=0 Jan 30 16:07:54 crc kubenswrapper[4740]: I0130 16:07:54.167119 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv" event={"ID":"d36022b6-9743-4c56-bdcf-b10ce676d3ac","Type":"ContainerDied","Data":"3797862c1eb46eca5788ef8fa301dd7db7baf391952eea0c10377c0dadab7b0c"} Jan 30 16:07:55 crc kubenswrapper[4740]: I0130 16:07:55.514569 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv" Jan 30 16:07:55 crc kubenswrapper[4740]: I0130 16:07:55.627285 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-bundle\") pod \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " Jan 30 16:07:55 crc kubenswrapper[4740]: I0130 16:07:55.627456 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-util\") pod \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " Jan 30 16:07:55 crc kubenswrapper[4740]: I0130 16:07:55.627673 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcgdm\" (UniqueName: \"kubernetes.io/projected/d36022b6-9743-4c56-bdcf-b10ce676d3ac-kube-api-access-zcgdm\") pod \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\" (UID: \"d36022b6-9743-4c56-bdcf-b10ce676d3ac\") " Jan 30 16:07:55 crc kubenswrapper[4740]: I0130 16:07:55.631665 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-bundle" (OuterVolumeSpecName: "bundle") pod "d36022b6-9743-4c56-bdcf-b10ce676d3ac" (UID: "d36022b6-9743-4c56-bdcf-b10ce676d3ac"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:07:55 crc kubenswrapper[4740]: I0130 16:07:55.638465 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36022b6-9743-4c56-bdcf-b10ce676d3ac-kube-api-access-zcgdm" (OuterVolumeSpecName: "kube-api-access-zcgdm") pod "d36022b6-9743-4c56-bdcf-b10ce676d3ac" (UID: "d36022b6-9743-4c56-bdcf-b10ce676d3ac"). InnerVolumeSpecName "kube-api-access-zcgdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:07:55 crc kubenswrapper[4740]: I0130 16:07:55.639031 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-util" (OuterVolumeSpecName: "util") pod "d36022b6-9743-4c56-bdcf-b10ce676d3ac" (UID: "d36022b6-9743-4c56-bdcf-b10ce676d3ac"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:07:55 crc kubenswrapper[4740]: I0130 16:07:55.729583 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcgdm\" (UniqueName: \"kubernetes.io/projected/d36022b6-9743-4c56-bdcf-b10ce676d3ac-kube-api-access-zcgdm\") on node \"crc\" DevicePath \"\"" Jan 30 16:07:55 crc kubenswrapper[4740]: I0130 16:07:55.729626 4740 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:07:55 crc kubenswrapper[4740]: I0130 16:07:55.729641 4740 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d36022b6-9743-4c56-bdcf-b10ce676d3ac-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:07:56 crc kubenswrapper[4740]: I0130 16:07:56.187936 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv" event={"ID":"d36022b6-9743-4c56-bdcf-b10ce676d3ac","Type":"ContainerDied","Data":"8b56e7b0a2d0a6223277bb749d6cd67c0c5d71d1e9c694b27b73f47907c2fcea"} Jan 30 16:07:56 crc kubenswrapper[4740]: I0130 16:07:56.188013 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b56e7b0a2d0a6223277bb749d6cd67c0c5d71d1e9c694b27b73f47907c2fcea" Jan 30 16:07:56 crc kubenswrapper[4740]: I0130 16:07:56.188659 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.509716 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jhsjm"] Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.511913 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovn-controller" containerID="cri-o://20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581" gracePeriod=30 Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.512005 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="nbdb" containerID="cri-o://de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6" gracePeriod=30 Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.512050 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952" gracePeriod=30 Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.512212 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="kube-rbac-proxy-node" containerID="cri-o://3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef" gracePeriod=30 Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.512145 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="northd" 
containerID="cri-o://73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9" gracePeriod=30 Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.512265 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovn-acl-logging" containerID="cri-o://70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46" gracePeriod=30 Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.512190 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="sbdb" containerID="cri-o://2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc" gracePeriod=30 Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.548037 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" containerID="cri-o://1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84" gracePeriod=30 Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.893092 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/3.log" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.896303 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovn-acl-logging/0.log" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.897160 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovn-controller/0.log" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.897734 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.921749 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v"] Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.921996 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="northd" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922010 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="northd" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922019 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922025 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922033 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36022b6-9743-4c56-bdcf-b10ce676d3ac" containerName="extract" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922039 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36022b6-9743-4c56-bdcf-b10ce676d3ac" containerName="extract" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922047 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922053 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922063 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="sbdb" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922069 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="sbdb" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922079 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922086 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922093 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36022b6-9743-4c56-bdcf-b10ce676d3ac" containerName="pull" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922099 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36022b6-9743-4c56-bdcf-b10ce676d3ac" containerName="pull" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922109 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovn-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922118 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovn-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922126 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="kube-rbac-proxy-node" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922133 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="kube-rbac-proxy-node" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922143 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922150 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922157 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36022b6-9743-4c56-bdcf-b10ce676d3ac" containerName="util" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922163 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36022b6-9743-4c56-bdcf-b10ce676d3ac" containerName="util" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922169 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="nbdb" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922175 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="nbdb" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922183 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovn-acl-logging" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922188 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovn-acl-logging" Jan 30 16:08:05 crc kubenswrapper[4740]: E0130 16:08:05.922199 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="kubecfg-setup" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922205 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="kubecfg-setup" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922289 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922296 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922303 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovn-acl-logging" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922314 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922321 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="nbdb" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922329 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36022b6-9743-4c56-bdcf-b10ce676d3ac" containerName="extract" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922336 4740 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922342 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovn-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922369 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="sbdb" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922377 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="northd" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922385 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="kube-rbac-proxy-node" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922411 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.922897 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.924454 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.925390 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-29tnx" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.925596 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970041 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-openvswitch\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970101 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-netns\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970135 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-log-socket\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970183 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-systemd\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970219 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hnwb\" (UniqueName: \"kubernetes.io/projected/2c06ab51-b857-47c7-a13a-e64edae96756-kube-api-access-6hnwb\") pod 
\"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970227 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970272 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-config\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970260 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970303 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c06ab51-b857-47c7-a13a-e64edae96756-ovn-node-metrics-cert\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970238 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-log-socket" (OuterVolumeSpecName: "log-socket") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970326 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-ovn-kubernetes\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970375 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970440 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-node-log\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970464 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-systemd-units\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970488 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-etc-openvswitch\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970511 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-bin\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970567 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-var-lib-openvswitch\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970599 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-slash\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970632 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-script-lib\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970654 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-kubelet\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970681 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970748 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-netd\") pod 
\"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970770 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970775 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-env-overrides\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.970823 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-ovn\") pod \"2c06ab51-b857-47c7-a13a-e64edae96756\" (UID: \"2c06ab51-b857-47c7-a13a-e64edae96756\") " Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971041 4740 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971052 4740 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971053 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971062 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971079 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971085 4740 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971100 4740 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971101 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971122 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-node-log" (OuterVolumeSpecName: "node-log") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971131 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-slash" (OuterVolumeSpecName: "host-slash") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971141 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971159 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971180 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971207 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971225 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971244 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.971447 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.986099 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c06ab51-b857-47c7-a13a-e64edae96756-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.986147 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c06ab51-b857-47c7-a13a-e64edae96756-kube-api-access-6hnwb" (OuterVolumeSpecName: "kube-api-access-6hnwb") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "kube-api-access-6hnwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:08:05 crc kubenswrapper[4740]: I0130 16:08:05.992425 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2c06ab51-b857-47c7-a13a-e64edae96756" (UID: "2c06ab51-b857-47c7-a13a-e64edae96756"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.023757 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zkjpr"] Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.024011 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.024028 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.024043 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.024053 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.024203 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" containerName="ovnkube-controller" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.026581 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.062040 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h"] Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.063073 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.069006 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qrcbp" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072158 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072160 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4srbp\" (UniqueName: \"kubernetes.io/projected/7fed297b-1b60-4fa1-81ad-f7aff661624d-kube-api-access-4srbp\") pod \"obo-prometheus-operator-68bc856cb9-fl52v\" (UID: \"7fed297b-1b60-4fa1-81ad-f7aff661624d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072284 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2c06ab51-b857-47c7-a13a-e64edae96756-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072307 4740 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072319 4740 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 16:08:06 crc 
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072331 4740 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072342 4740 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072371 4740 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072382 4740 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-slash\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072393 4740 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072408 4740 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072423 4740 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072435 4740 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072447 4740 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2c06ab51-b857-47c7-a13a-e64edae96756-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072458 4740 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072469 4740 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2c06ab51-b857-47c7-a13a-e64edae96756-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.072480 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hnwb\" (UniqueName: \"kubernetes.io/projected/2c06ab51-b857-47c7-a13a-e64edae96756-kube-api-access-6hnwb\") on node \"crc\" DevicePath \"\""
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.098965 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b"]
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.099777 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174073 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-cni-netd\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174132 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4srbp\" (UniqueName: \"kubernetes.io/projected/7fed297b-1b60-4fa1-81ad-f7aff661624d-kube-api-access-4srbp\") pod \"obo-prometheus-operator-68bc856cb9-fl52v\" (UID: \"7fed297b-1b60-4fa1-81ad-f7aff661624d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174159 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-var-lib-openvswitch\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174181 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-run-netns\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174212 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/726a4a2c-0639-47f2-8aa6-47c5c92708da-ovnkube-script-lib\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174230 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-run-openvswitch\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174248 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/726a4a2c-0639-47f2-8aa6-47c5c92708da-ovn-node-metrics-cert\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174271 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/27f815e6-2917-46af-8a6d-4bcd66c35042-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h\" (UID: \"27f815e6-2917-46af-8a6d-4bcd66c35042\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174295 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-run-ovn\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174731 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/726a4a2c-0639-47f2-8aa6-47c5c92708da-env-overrides\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174825 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/726a4a2c-0639-47f2-8aa6-47c5c92708da-ovnkube-config\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174884 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-run-systemd\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174916 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-slash\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174944 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/27f815e6-2917-46af-8a6d-4bcd66c35042-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h\" (UID: \"27f815e6-2917-46af-8a6d-4bcd66c35042\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.174975 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-systemd-units\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.175060 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-cni-bin\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.175143 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.175172 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e70968d1-7497-4724-9c80-cf5abdf288ea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b\" (UID: \"e70968d1-7497-4724-9c80-cf5abdf288ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.175309 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-etc-openvswitch\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.175340 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s8ms\" (UniqueName: \"kubernetes.io/projected/726a4a2c-0639-47f2-8aa6-47c5c92708da-kube-api-access-5s8ms\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.175399 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e70968d1-7497-4724-9c80-cf5abdf288ea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b\" (UID: \"e70968d1-7497-4724-9c80-cf5abdf288ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.175451 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-run-ovn-kubernetes\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.175475 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-log-socket\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.175516 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-kubelet\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.175575 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-node-log\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.192120 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4srbp\" (UniqueName: \"kubernetes.io/projected/7fed297b-1b60-4fa1-81ad-f7aff661624d-kube-api-access-4srbp\") pod \"obo-prometheus-operator-68bc856cb9-fl52v\" (UID: \"7fed297b-1b60-4fa1-81ad-f7aff661624d\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.240844 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.249132 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pkzlw_e65088cb-e700-4af1-b788-af399f918bd0/kube-multus/2.log"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.249657 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pkzlw_e65088cb-e700-4af1-b788-af399f918bd0/kube-multus/1.log"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.249715 4740 generic.go:334] "Generic (PLEG): container finished" podID="e65088cb-e700-4af1-b788-af399f918bd0" containerID="4deaee5491574ddce3f8b6266f274aca00c442e1961910366d9aca5c00715c3c" exitCode=2
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.249804 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pkzlw" event={"ID":"e65088cb-e700-4af1-b788-af399f918bd0","Type":"ContainerDied","Data":"4deaee5491574ddce3f8b6266f274aca00c442e1961910366d9aca5c00715c3c"}
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.249881 4740 scope.go:117] "RemoveContainer" containerID="1a67e2cfddeace852690caa03a4f1aac97554cc77b358592363b589c6332ac46"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.250944 4740 scope.go:117] "RemoveContainer" containerID="4deaee5491574ddce3f8b6266f274aca00c442e1961910366d9aca5c00715c3c"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.252888 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovnkube-controller/3.log"
Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.254069 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-pkzlw_openshift-multus(e65088cb-e700-4af1-b788-af399f918bd0)\"" pod="openshift-multus/multus-pkzlw" podUID="e65088cb-e700-4af1-b788-af399f918bd0"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.267642 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovn-acl-logging/0.log"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268457 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhsjm_2c06ab51-b857-47c7-a13a-e64edae96756/ovn-controller/0.log"
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268804 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84" exitCode=0
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268834 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc" exitCode=0
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268844 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6" exitCode=0
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268854 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9" exitCode=0
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268865 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952" exitCode=0
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268874 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef" exitCode=0
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268886 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46" exitCode=143
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268896 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c06ab51-b857-47c7-a13a-e64edae96756" containerID="20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581" exitCode=143
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268925 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84"}
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc"}
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268985 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6"}
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.268998 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9"}
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.269010 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952"}
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.269022 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef"}
Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.269039 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84"}
remove container" containerID={"Type":"cri-o","ID":"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.269064 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.271743 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.277809 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.277873 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.277888 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.277897 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.277904 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.277911 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.277917 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.277924 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.277957 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.277996 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.278007 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.278013 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.278019 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.278025 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.278031 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.278037 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.278044 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.278050 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.278056 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.278069 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pdgvg"] Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.278993 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279020 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279030 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279039 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279046 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279054 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279062 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279068 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279075 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279082 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279088 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279098 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhsjm" event={"ID":"2c06ab51-b857-47c7-a13a-e64edae96756","Type":"ContainerDied","Data":"ec052aa91ddf29205cfa35c0846942ec93588c0c6cc2314d26df2c9ef6ca3057"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279109 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279116 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279123 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279129 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279136 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279142 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279148 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279154 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279163 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279169 4740 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b"} Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.279278 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281179 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-node-log\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281235 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-cni-netd\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281263 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-var-lib-openvswitch\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281285 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-run-netns\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281305 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/726a4a2c-0639-47f2-8aa6-47c5c92708da-ovnkube-script-lib\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281322 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-run-openvswitch\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281337 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/726a4a2c-0639-47f2-8aa6-47c5c92708da-ovn-node-metrics-cert\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 
16:08:06.281379 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/27f815e6-2917-46af-8a6d-4bcd66c35042-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h\" (UID: \"27f815e6-2917-46af-8a6d-4bcd66c35042\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281397 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-run-ovn\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281428 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/726a4a2c-0639-47f2-8aa6-47c5c92708da-env-overrides\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/726a4a2c-0639-47f2-8aa6-47c5c92708da-ovnkube-config\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281470 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-run-systemd\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281485 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-slash\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281510 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/27f815e6-2917-46af-8a6d-4bcd66c35042-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h\" (UID: \"27f815e6-2917-46af-8a6d-4bcd66c35042\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281539 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-systemd-units\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281589 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-cni-bin\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281612 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e70968d1-7497-4724-9c80-cf5abdf288ea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b\" (UID: \"e70968d1-7497-4724-9c80-cf5abdf288ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e70968d1-7497-4724-9c80-cf5abdf288ea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b\" (UID: \"e70968d1-7497-4724-9c80-cf5abdf288ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281678 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-etc-openvswitch\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281706 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s8ms\" (UniqueName: \"kubernetes.io/projected/726a4a2c-0639-47f2-8aa6-47c5c92708da-kube-api-access-5s8ms\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281729 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-run-ovn-kubernetes\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281753 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-log-socket\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281773 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-kubelet\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.281839 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-kubelet\") pod \"ovnkube-node-zkjpr\" 
(UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.282489 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/726a4a2c-0639-47f2-8aa6-47c5c92708da-env-overrides\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.282546 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-node-log\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.282602 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-cni-netd\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.282639 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-var-lib-openvswitch\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.282635 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-cni-bin\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.282660 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/726a4a2c-0639-47f2-8aa6-47c5c92708da-ovnkube-config\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.282672 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-run-systemd\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.282693 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-run-netns\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.282759 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-slash\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.283251 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/726a4a2c-0639-47f2-8aa6-47c5c92708da-ovnkube-script-lib\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.283307 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-run-openvswitch\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.285395 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-run-ovn\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.285490 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-systemd-units\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.285522 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.286004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-etc-openvswitch\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.286222 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-4p6p6" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.286435 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.286533 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-host-run-ovn-kubernetes\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.286565 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/726a4a2c-0639-47f2-8aa6-47c5c92708da-log-socket\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.286832 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/27f815e6-2917-46af-8a6d-4bcd66c35042-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h\" (UID: \"27f815e6-2917-46af-8a6d-4bcd66c35042\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.287737 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/726a4a2c-0639-47f2-8aa6-47c5c92708da-ovn-node-metrics-cert\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.293866 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/27f815e6-2917-46af-8a6d-4bcd66c35042-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h\" (UID: \"27f815e6-2917-46af-8a6d-4bcd66c35042\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.296584 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(5dd7215b51fddfadbb3b42b4b2d5a8a4e45be347f1ad06d95cc5ee909ff3ead5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.296654 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(5dd7215b51fddfadbb3b42b4b2d5a8a4e45be347f1ad06d95cc5ee909ff3ead5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.296684 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(5dd7215b51fddfadbb3b42b4b2d5a8a4e45be347f1ad06d95cc5ee909ff3ead5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.296749 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators(7fed297b-1b60-4fa1-81ad-f7aff661624d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators(7fed297b-1b60-4fa1-81ad-f7aff661624d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(5dd7215b51fddfadbb3b42b4b2d5a8a4e45be347f1ad06d95cc5ee909ff3ead5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" podUID="7fed297b-1b60-4fa1-81ad-f7aff661624d" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.311270 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e70968d1-7497-4724-9c80-cf5abdf288ea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b\" (UID: \"e70968d1-7497-4724-9c80-cf5abdf288ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.315146 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s8ms\" (UniqueName: \"kubernetes.io/projected/726a4a2c-0639-47f2-8aa6-47c5c92708da-kube-api-access-5s8ms\") pod \"ovnkube-node-zkjpr\" (UID: \"726a4a2c-0639-47f2-8aa6-47c5c92708da\") " pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.316386 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e70968d1-7497-4724-9c80-cf5abdf288ea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b\" (UID: \"e70968d1-7497-4724-9c80-cf5abdf288ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.327723 4740 scope.go:117] "RemoveContainer" containerID="1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.342309 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.349768 4740 scope.go:117] "RemoveContainer" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.367001 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jhsjm"] Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.371991 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jhsjm"] Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.379425 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.383142 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrsbd\" (UniqueName: \"kubernetes.io/projected/6a0acde2-70b4-4622-a609-290cbc5f253f-kube-api-access-hrsbd\") pod \"observability-operator-59bdc8b94-pdgvg\" (UID: \"6a0acde2-70b4-4622-a609-290cbc5f253f\") " pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.383177 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a0acde2-70b4-4622-a609-290cbc5f253f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pdgvg\" (UID: \"6a0acde2-70b4-4622-a609-290cbc5f253f\") " pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.387531 4740 scope.go:117] "RemoveContainer" containerID="2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.406294 4740 scope.go:117] "RemoveContainer" containerID="de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.422714 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(166c4e138bce82b39ce080ea99fe907d7c8c5d1bbd432de75257121290f9e180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.422841 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(166c4e138bce82b39ce080ea99fe907d7c8c5d1bbd432de75257121290f9e180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.422869 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(166c4e138bce82b39ce080ea99fe907d7c8c5d1bbd432de75257121290f9e180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.422942 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators(27f815e6-2917-46af-8a6d-4bcd66c35042)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators(27f815e6-2917-46af-8a6d-4bcd66c35042)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(166c4e138bce82b39ce080ea99fe907d7c8c5d1bbd432de75257121290f9e180): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" podUID="27f815e6-2917-46af-8a6d-4bcd66c35042" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.423621 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.452887 4740 scope.go:117] "RemoveContainer" containerID="73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.456269 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-r2zbm"] Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.457132 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.459444 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-wf8ck" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.490694 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrsbd\" (UniqueName: \"kubernetes.io/projected/6a0acde2-70b4-4622-a609-290cbc5f253f-kube-api-access-hrsbd\") pod \"observability-operator-59bdc8b94-pdgvg\" (UID: \"6a0acde2-70b4-4622-a609-290cbc5f253f\") " pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.491075 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a0acde2-70b4-4622-a609-290cbc5f253f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pdgvg\" (UID: \"6a0acde2-70b4-4622-a609-290cbc5f253f\") " pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.502485 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a0acde2-70b4-4622-a609-290cbc5f253f-observability-operator-tls\") pod \"observability-operator-59bdc8b94-pdgvg\" (UID: \"6a0acde2-70b4-4622-a609-290cbc5f253f\") " pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.508175 4740 scope.go:117] "RemoveContainer" containerID="f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.518645 4740 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(48c60dc16f70ae26369af368ad442b441736634da76d30caba84d1127d1d11fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.518729 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(48c60dc16f70ae26369af368ad442b441736634da76d30caba84d1127d1d11fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.518753 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(48c60dc16f70ae26369af368ad442b441736634da76d30caba84d1127d1d11fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.518811 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators(e70968d1-7497-4724-9c80-cf5abdf288ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators(e70968d1-7497-4724-9c80-cf5abdf288ea)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(48c60dc16f70ae26369af368ad442b441736634da76d30caba84d1127d1d11fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" podUID="e70968d1-7497-4724-9c80-cf5abdf288ea" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.530513 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrsbd\" (UniqueName: \"kubernetes.io/projected/6a0acde2-70b4-4622-a609-290cbc5f253f-kube-api-access-hrsbd\") pod \"observability-operator-59bdc8b94-pdgvg\" (UID: \"6a0acde2-70b4-4622-a609-290cbc5f253f\") " pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.543672 4740 scope.go:117] "RemoveContainer" containerID="3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.567641 4740 scope.go:117] "RemoveContainer" containerID="70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.592116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/522756c7-f451-4879-b2b3-2d19b80cb751-openshift-service-ca\") pod \"perses-operator-5bf474d74f-r2zbm\" (UID: \"522756c7-f451-4879-b2b3-2d19b80cb751\") " pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.592598 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcp76\" (UniqueName: \"kubernetes.io/projected/522756c7-f451-4879-b2b3-2d19b80cb751-kube-api-access-kcp76\") pod \"perses-operator-5bf474d74f-r2zbm\" (UID: \"522756c7-f451-4879-b2b3-2d19b80cb751\") " pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.593465 4740 scope.go:117] "RemoveContainer" containerID="20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.608923 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.635053 4740 scope.go:117] "RemoveContainer" containerID="e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.660698 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(f0d265595c236b6e87b1bd594981ada2eccb14975646e18d3cb65bd28676c14a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.660808 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(f0d265595c236b6e87b1bd594981ada2eccb14975646e18d3cb65bd28676c14a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.660840 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(f0d265595c236b6e87b1bd594981ada2eccb14975646e18d3cb65bd28676c14a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.660902 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-pdgvg_openshift-operators(6a0acde2-70b4-4622-a609-290cbc5f253f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-pdgvg_openshift-operators(6a0acde2-70b4-4622-a609-290cbc5f253f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(f0d265595c236b6e87b1bd594981ada2eccb14975646e18d3cb65bd28676c14a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" podUID="6a0acde2-70b4-4622-a609-290cbc5f253f" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.691194 4740 scope.go:117] "RemoveContainer" containerID="1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.691730 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84\": container with ID starting with 1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84 not found: ID does not exist" containerID="1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.691778 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84"} err="failed to get container status \"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84\": rpc error: code = NotFound desc = could not find container \"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84\": container with ID starting with 1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.691801 4740 scope.go:117] "RemoveContainer" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.692068 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\": container with ID starting with c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f not found: ID does not exist" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.692087 4740 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f"} err="failed to get container status \"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\": rpc error: code = NotFound desc = could not find container \"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\": container with ID starting with c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.692101 4740 scope.go:117] "RemoveContainer" containerID="2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.692346 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\": container with ID starting with 2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc not found: ID does not exist" containerID="2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.692383 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc"} err="failed to get container status \"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\": rpc error: code = NotFound desc = could not find container \"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\": container with ID starting with 2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.692401 4740 scope.go:117] "RemoveContainer" containerID="de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.692830 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\": container with ID starting with de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6 not found: ID does not exist" containerID="de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.692849 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6"} err="failed to get container status \"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\": rpc error: code = NotFound desc = could not find container \"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\": container with ID starting with de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.692862 4740 scope.go:117] "RemoveContainer" containerID="73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.693233 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\": container with ID starting with 73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9 not found: ID does not exist" 
containerID="73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.693302 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9"} err="failed to get container status \"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\": rpc error: code = NotFound desc = could not find container \"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\": container with ID starting with 73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.693373 4740 scope.go:117] "RemoveContainer" containerID="f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.693644 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp76\" (UniqueName: \"kubernetes.io/projected/522756c7-f451-4879-b2b3-2d19b80cb751-kube-api-access-kcp76\") pod \"perses-operator-5bf474d74f-r2zbm\" (UID: \"522756c7-f451-4879-b2b3-2d19b80cb751\") " pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.693727 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/522756c7-f451-4879-b2b3-2d19b80cb751-openshift-service-ca\") pod \"perses-operator-5bf474d74f-r2zbm\" (UID: \"522756c7-f451-4879-b2b3-2d19b80cb751\") " pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.694103 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\": container with ID starting with f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952 not found: ID does not exist" containerID="f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.694142 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952"} err="failed to get container status \"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\": rpc error: code = NotFound desc = could not find container \"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\": container with ID starting with f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.694165 4740 scope.go:117] "RemoveContainer" containerID="3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.694581 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\": container with ID starting with 3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef not found: ID does not exist" containerID="3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.694610 4740 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef"} err="failed to get container status \"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\": rpc error: code = NotFound desc = could not find container \"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\": container with ID starting with 3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.694630 4740 scope.go:117] "RemoveContainer" containerID="70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.694750 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/522756c7-f451-4879-b2b3-2d19b80cb751-openshift-service-ca\") pod \"perses-operator-5bf474d74f-r2zbm\" (UID: \"522756c7-f451-4879-b2b3-2d19b80cb751\") " pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.695733 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\": container with ID starting with 70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46 not found: ID does not exist" containerID="70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.695773 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46"} err="failed to get container status \"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\": rpc error: code = NotFound desc = could not find container \"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\": container with ID starting with 70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.695807 4740 scope.go:117] "RemoveContainer" containerID="20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.696251 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\": container with ID starting with 20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581 not found: ID does not exist" containerID="20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.696296 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581"} err="failed to get container status \"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\": rpc error: code = NotFound desc = could not find container \"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\": container with ID starting with 20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.696320 4740 scope.go:117] "RemoveContainer" containerID="e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b" Jan 30 16:08:06 crc 
kubenswrapper[4740]: E0130 16:08:06.696635 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\": container with ID starting with e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b not found: ID does not exist" containerID="e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.696667 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b"} err="failed to get container status \"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\": rpc error: code = NotFound desc = could not find container \"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\": container with ID starting with e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.696686 4740 scope.go:117] "RemoveContainer" containerID="1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.697061 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84"} err="failed to get container status \"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84\": rpc error: code = NotFound desc = could not find container \"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84\": container with ID starting with 1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.697095 4740 scope.go:117] "RemoveContainer" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.697330 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f"} err="failed to get container status \"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\": rpc error: code = NotFound desc = could not find container \"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\": container with ID starting with c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.697378 4740 scope.go:117] "RemoveContainer" containerID="2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.697726 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc"} err="failed to get container status \"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\": rpc error: code = NotFound desc = could not find container \"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\": container with ID starting with 2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.697756 4740 scope.go:117] "RemoveContainer" containerID="de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6" Jan 30 16:08:06 crc 
kubenswrapper[4740]: I0130 16:08:06.697986 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6"} err="failed to get container status \"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\": rpc error: code = NotFound desc = could not find container \"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\": container with ID starting with de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.698015 4740 scope.go:117] "RemoveContainer" containerID="73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.698450 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9"} err="failed to get container status \"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\": rpc error: code = NotFound desc = could not find container \"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\": container with ID starting with 73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.698476 4740 scope.go:117] "RemoveContainer" containerID="f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.698739 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952"} err="failed to get container status \"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\": rpc error: code = NotFound desc = could not find container \"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\": container with ID starting with f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.698767 4740 scope.go:117] "RemoveContainer" containerID="3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.699112 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef"} err="failed to get container status \"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\": rpc error: code = NotFound desc = could not find container \"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\": container with ID starting with 3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.699136 4740 scope.go:117] "RemoveContainer" containerID="70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.699889 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46"} err="failed to get container status \"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\": rpc error: code = NotFound desc = could not find container \"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\": container with ID 
starting with 70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.699914 4740 scope.go:117] "RemoveContainer" containerID="20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.700123 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581"} err="failed to get container status \"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\": rpc error: code = NotFound desc = could not find container \"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\": container with ID starting with 20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.700154 4740 scope.go:117] "RemoveContainer" containerID="e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.700501 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b"} err="failed to get container status \"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\": rpc error: code = NotFound desc = could not find container \"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\": container with ID starting with e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.700528 4740 scope.go:117] "RemoveContainer" containerID="1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.700763 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84"} err="failed to get container status \"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84\": rpc error: code = NotFound desc = could not find container \"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84\": container with ID starting with 1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.700795 4740 scope.go:117] "RemoveContainer" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.701105 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f"} err="failed to get container status \"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\": rpc error: code = NotFound desc = could not find container \"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\": container with ID starting with c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.701131 4740 scope.go:117] "RemoveContainer" containerID="2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.701441 4740 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc"} err="failed to get container status \"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\": rpc error: code = NotFound desc = could not find container \"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\": container with ID starting with 2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.701469 4740 scope.go:117] "RemoveContainer" containerID="de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.701795 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6"} err="failed to get container status \"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\": rpc error: code = NotFound desc = could not find container \"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\": container with ID starting with de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.701824 4740 scope.go:117] "RemoveContainer" containerID="73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.702086 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9"} err="failed to get container status \"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\": rpc error: code = NotFound desc = could not find container \"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\": container with ID starting with 73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.702117 4740 scope.go:117] "RemoveContainer" containerID="f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.702465 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952"} err="failed to get container status \"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\": rpc error: code = NotFound desc = could not find container \"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\": container with ID starting with f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.702493 4740 scope.go:117] "RemoveContainer" containerID="3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.702697 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef"} err="failed to get container status \"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\": rpc error: code = NotFound desc = could not find container \"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\": container with ID starting with 3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef not found: ID does not exist" Jan 
30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.702722 4740 scope.go:117] "RemoveContainer" containerID="70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.702932 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46"} err="failed to get container status \"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\": rpc error: code = NotFound desc = could not find container \"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\": container with ID starting with 70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.702959 4740 scope.go:117] "RemoveContainer" containerID="20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.703396 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581"} err="failed to get container status \"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\": rpc error: code = NotFound desc = could not find container \"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\": container with ID starting with 20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.703426 4740 scope.go:117] "RemoveContainer" containerID="e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.703669 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b"} err="failed to get container status \"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\": rpc error: code = NotFound desc = could not find container \"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\": container with ID starting with e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.703694 4740 scope.go:117] "RemoveContainer" containerID="1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.704012 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84"} err="failed to get container status \"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84\": rpc error: code = NotFound desc = could not find container \"1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84\": container with ID starting with 1d080605b3c05c0c7344838a9f7df3a936966dc8f2f6f1b01f409c75113c5d84 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.704037 4740 scope.go:117] "RemoveContainer" containerID="c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.704274 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f"} err="failed to get container status 
\"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\": rpc error: code = NotFound desc = could not find container \"c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f\": container with ID starting with c8dc1e54716cce53de0fb3adacb9c5fb41a9539cac97749d4dac5de001c1145f not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.704303 4740 scope.go:117] "RemoveContainer" containerID="2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.704634 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc"} err="failed to get container status \"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\": rpc error: code = NotFound desc = could not find container \"2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc\": container with ID starting with 2e12c594d95ae55a715fc4b39195c91a57b6e919707411ecaf2a4d5de9b1e1fc not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.704659 4740 scope.go:117] "RemoveContainer" containerID="de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.704878 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6"} err="failed to get container status \"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\": rpc error: code = NotFound desc = could not find container \"de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6\": container with ID starting with de86b26343b03856fd726bba58b5567fc6e7005e1b154c1c06ac259798a4fca6 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.704902 4740 scope.go:117] "RemoveContainer" containerID="73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.705211 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9"} err="failed to get container status \"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\": rpc error: code = NotFound desc = could not find container \"73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9\": container with ID starting with 73224d8f196a1991db771dc2841020c266cde457d1de995ed01c72fde53ca6a9 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.705233 4740 scope.go:117] "RemoveContainer" containerID="f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.705457 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952"} err="failed to get container status \"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\": rpc error: code = NotFound desc = could not find container \"f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952\": container with ID starting with f787ea86e0c677b1a34e2382160236b37c4a608494291ee5248da842a6742952 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.705474 4740 scope.go:117] "RemoveContainer" 
containerID="3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.705712 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef"} err="failed to get container status \"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\": rpc error: code = NotFound desc = could not find container \"3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef\": container with ID starting with 3f2b0a76fe60ca448aae87a995dc64c1250ced5bdfbdbae0dc778aa4723bb1ef not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.705742 4740 scope.go:117] "RemoveContainer" containerID="70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.706000 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46"} err="failed to get container status \"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\": rpc error: code = NotFound desc = could not find container \"70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46\": container with ID starting with 70aca26abc87ea96888ed67e56354828e3abcef120762365f8d075859dc15b46 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.706021 4740 scope.go:117] "RemoveContainer" containerID="20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.706307 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581"} err="failed to get container status \"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\": rpc error: code = NotFound desc = could not find container \"20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581\": container with ID starting with 20bf7a347542bc3bb8a4581044af2cb1fc71e216978deea02cb10308ecef8581 not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.706330 4740 scope.go:117] "RemoveContainer" containerID="e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.706566 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b"} err="failed to get container status \"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\": rpc error: code = NotFound desc = could not find container \"e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b\": container with ID starting with e2af467d46e1eaf6a01389febede73c8ccf84108f646b1cb788de9f68c18121b not found: ID does not exist" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.714778 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcp76\" (UniqueName: \"kubernetes.io/projected/522756c7-f451-4879-b2b3-2d19b80cb751-kube-api-access-kcp76\") pod \"perses-operator-5bf474d74f-r2zbm\" (UID: \"522756c7-f451-4879-b2b3-2d19b80cb751\") " pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:06 crc kubenswrapper[4740]: I0130 16:08:06.802933 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.829122 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(5bdfdee89ba755bda7aa6ba1c2a3b38cf0ba70bdf3bce3aa32327dc896719dff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.829220 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(5bdfdee89ba755bda7aa6ba1c2a3b38cf0ba70bdf3bce3aa32327dc896719dff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.829254 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(5bdfdee89ba755bda7aa6ba1c2a3b38cf0ba70bdf3bce3aa32327dc896719dff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:06 crc kubenswrapper[4740]: E0130 16:08:06.829324 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-r2zbm_openshift-operators(522756c7-f451-4879-b2b3-2d19b80cb751)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-r2zbm_openshift-operators(522756c7-f451-4879-b2b3-2d19b80cb751)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(5bdfdee89ba755bda7aa6ba1c2a3b38cf0ba70bdf3bce3aa32327dc896719dff): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" podUID="522756c7-f451-4879-b2b3-2d19b80cb751" Jan 30 16:08:07 crc kubenswrapper[4740]: I0130 16:08:07.289125 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pkzlw_e65088cb-e700-4af1-b788-af399f918bd0/kube-multus/2.log" Jan 30 16:08:07 crc kubenswrapper[4740]: I0130 16:08:07.297129 4740 generic.go:334] "Generic (PLEG): container finished" podID="726a4a2c-0639-47f2-8aa6-47c5c92708da" containerID="8ea3a501fffe74c379457f528b2b50f8b35a1fd5ecbe3075856e2fde20317d4f" exitCode=0 Jan 30 16:08:07 crc kubenswrapper[4740]: I0130 16:08:07.297183 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" event={"ID":"726a4a2c-0639-47f2-8aa6-47c5c92708da","Type":"ContainerDied","Data":"8ea3a501fffe74c379457f528b2b50f8b35a1fd5ecbe3075856e2fde20317d4f"} Jan 30 16:08:07 crc kubenswrapper[4740]: I0130 16:08:07.297222 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" event={"ID":"726a4a2c-0639-47f2-8aa6-47c5c92708da","Type":"ContainerStarted","Data":"c4c50f36f6156a831ba3bbf828ebbc62c842d6002970daa62e29f01a8e90a624"} Jan 30 16:08:07 crc kubenswrapper[4740]: I0130 16:08:07.346376 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c06ab51-b857-47c7-a13a-e64edae96756" path="/var/lib/kubelet/pods/2c06ab51-b857-47c7-a13a-e64edae96756/volumes" Jan 30 16:08:08 crc kubenswrapper[4740]: I0130 16:08:08.307750 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" event={"ID":"726a4a2c-0639-47f2-8aa6-47c5c92708da","Type":"ContainerStarted","Data":"2026a7ea147ce8d7a658982326f74b1c5dea4fe8288193a870fafb0f329c7095"} Jan 30 16:08:08 crc kubenswrapper[4740]: I0130 16:08:08.308194 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" event={"ID":"726a4a2c-0639-47f2-8aa6-47c5c92708da","Type":"ContainerStarted","Data":"d1839039aebba4109f74730a9c7c974b458b459e0d4c5e73ad1ed0fb3cbd7814"} Jan 30 16:08:08 crc kubenswrapper[4740]: I0130 16:08:08.308208 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" event={"ID":"726a4a2c-0639-47f2-8aa6-47c5c92708da","Type":"ContainerStarted","Data":"d50f61688202aa35a9deb9df3c8035ffa0809589a573fa3eef09c44971b4a2b2"} Jan 30 16:08:08 crc kubenswrapper[4740]: I0130 16:08:08.308218 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" event={"ID":"726a4a2c-0639-47f2-8aa6-47c5c92708da","Type":"ContainerStarted","Data":"42cab5d7081c630b728d890c71f449342dbf724906dd9f80edff53d57164bd32"} Jan 30 16:08:08 crc kubenswrapper[4740]: I0130 16:08:08.308227 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" event={"ID":"726a4a2c-0639-47f2-8aa6-47c5c92708da","Type":"ContainerStarted","Data":"1c8764b92474df1e4d8e11ef04bedf7bff598ef47b765237835738ef4872d4cf"} Jan 30 16:08:08 crc kubenswrapper[4740]: I0130 16:08:08.308235 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" event={"ID":"726a4a2c-0639-47f2-8aa6-47c5c92708da","Type":"ContainerStarted","Data":"ef66b4f9a4a2fb1f655e554007347508d2ce1b94c1f06baeafeb7652858a9bec"} Jan 30 16:08:10 crc kubenswrapper[4740]: I0130 16:08:10.330597 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" event={"ID":"726a4a2c-0639-47f2-8aa6-47c5c92708da","Type":"ContainerStarted","Data":"03272131e4f5de17e9c1fcea4c49a0873d31db45c3edb1e7de79cc61a9d3e968"} Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.348791 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" event={"ID":"726a4a2c-0639-47f2-8aa6-47c5c92708da","Type":"ContainerStarted","Data":"95026b0c9f4b2ccf512618ef17276e1f59496a3a0b4515196832554633d54a15"} Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.349378 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.380697 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.385031 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" podStartSLOduration=7.385003596 podStartE2EDuration="7.385003596s" podCreationTimestamp="2026-01-30 16:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:08:13.383023246 +0000 UTC m=+742.020085855" watchObservedRunningTime="2026-01-30 16:08:13.385003596 +0000 UTC m=+742.022066185" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.423450 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pdgvg"] Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.423585 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.423968 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.427478 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h"] Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.427630 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.428204 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.449523 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b"] Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.449682 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.454517 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v"] Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.454664 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.455181 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.455188 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.464724 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-r2zbm"] Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.464888 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:13 crc kubenswrapper[4740]: I0130 16:08:13.465471 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.496181 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(90054ed54550f118d27861f688a44e0589603e3378b9a7e845f2ee06c01afc0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.496248 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(90054ed54550f118d27861f688a44e0589603e3378b9a7e845f2ee06c01afc0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.496273 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(90054ed54550f118d27861f688a44e0589603e3378b9a7e845f2ee06c01afc0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.496323 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators(27f815e6-2917-46af-8a6d-4bcd66c35042)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators(27f815e6-2917-46af-8a6d-4bcd66c35042)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(90054ed54550f118d27861f688a44e0589603e3378b9a7e845f2ee06c01afc0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" podUID="27f815e6-2917-46af-8a6d-4bcd66c35042" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.518544 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(f9c16eedb0f18ec2385af40482d602603342ddb9d527c424dae509efe362b411): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.518619 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(f9c16eedb0f18ec2385af40482d602603342ddb9d527c424dae509efe362b411): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.518645 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(f9c16eedb0f18ec2385af40482d602603342ddb9d527c424dae509efe362b411): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.518690 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-pdgvg_openshift-operators(6a0acde2-70b4-4622-a609-290cbc5f253f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-pdgvg_openshift-operators(6a0acde2-70b4-4622-a609-290cbc5f253f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(f9c16eedb0f18ec2385af40482d602603342ddb9d527c424dae509efe362b411): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" podUID="6a0acde2-70b4-4622-a609-290cbc5f253f" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.543567 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(a363116465f6843d3f9556f7e3ab51fcdc74c2b882e0295a06bd6f293d49f63e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.543655 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(a363116465f6843d3f9556f7e3ab51fcdc74c2b882e0295a06bd6f293d49f63e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.543679 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(a363116465f6843d3f9556f7e3ab51fcdc74c2b882e0295a06bd6f293d49f63e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.543742 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators(e70968d1-7497-4724-9c80-cf5abdf288ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators(e70968d1-7497-4724-9c80-cf5abdf288ea)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(a363116465f6843d3f9556f7e3ab51fcdc74c2b882e0295a06bd6f293d49f63e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" podUID="e70968d1-7497-4724-9c80-cf5abdf288ea" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.550551 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(fda99369768aa1e0ac63f17e4062da7a835e98657767eed66448f84e44a7afa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.550661 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(fda99369768aa1e0ac63f17e4062da7a835e98657767eed66448f84e44a7afa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.550693 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(fda99369768aa1e0ac63f17e4062da7a835e98657767eed66448f84e44a7afa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.550766 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators(7fed297b-1b60-4fa1-81ad-f7aff661624d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators(7fed297b-1b60-4fa1-81ad-f7aff661624d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(fda99369768aa1e0ac63f17e4062da7a835e98657767eed66448f84e44a7afa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" podUID="7fed297b-1b60-4fa1-81ad-f7aff661624d" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.556929 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(a17b75feca0aeb0ae47451a4f412f83b97aa78ee3d1c6aeae880863e618fbbdc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.556979 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(a17b75feca0aeb0ae47451a4f412f83b97aa78ee3d1c6aeae880863e618fbbdc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.556998 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(a17b75feca0aeb0ae47451a4f412f83b97aa78ee3d1c6aeae880863e618fbbdc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:13 crc kubenswrapper[4740]: E0130 16:08:13.557039 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-r2zbm_openshift-operators(522756c7-f451-4879-b2b3-2d19b80cb751)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-r2zbm_openshift-operators(522756c7-f451-4879-b2b3-2d19b80cb751)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(a17b75feca0aeb0ae47451a4f412f83b97aa78ee3d1c6aeae880863e618fbbdc): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" podUID="522756c7-f451-4879-b2b3-2d19b80cb751" Jan 30 16:08:14 crc kubenswrapper[4740]: I0130 16:08:14.354595 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:14 crc kubenswrapper[4740]: I0130 16:08:14.354649 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:14 crc kubenswrapper[4740]: I0130 16:08:14.431193 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:18 crc kubenswrapper[4740]: I0130 16:08:18.335104 4740 scope.go:117] "RemoveContainer" containerID="4deaee5491574ddce3f8b6266f274aca00c442e1961910366d9aca5c00715c3c" Jan 30 16:08:18 crc kubenswrapper[4740]: E0130 16:08:18.336139 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-pkzlw_openshift-multus(e65088cb-e700-4af1-b788-af399f918bd0)\"" pod="openshift-multus/multus-pkzlw" podUID="e65088cb-e700-4af1-b788-af399f918bd0" Jan 30 16:08:24 crc kubenswrapper[4740]: I0130 16:08:24.334757 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:24 crc kubenswrapper[4740]: I0130 16:08:24.335650 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:24 crc kubenswrapper[4740]: E0130 16:08:24.370024 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(c537ae7a51f3046347efcae5f9ae48dcebfec4f6dbaebd769c13f80f0714852a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:24 crc kubenswrapper[4740]: E0130 16:08:24.370139 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(c537ae7a51f3046347efcae5f9ae48dcebfec4f6dbaebd769c13f80f0714852a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:24 crc kubenswrapper[4740]: E0130 16:08:24.370174 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(c537ae7a51f3046347efcae5f9ae48dcebfec4f6dbaebd769c13f80f0714852a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:24 crc kubenswrapper[4740]: E0130 16:08:24.370241 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-pdgvg_openshift-operators(6a0acde2-70b4-4622-a609-290cbc5f253f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-pdgvg_openshift-operators(6a0acde2-70b4-4622-a609-290cbc5f253f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-pdgvg_openshift-operators_6a0acde2-70b4-4622-a609-290cbc5f253f_0(c537ae7a51f3046347efcae5f9ae48dcebfec4f6dbaebd769c13f80f0714852a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" podUID="6a0acde2-70b4-4622-a609-290cbc5f253f" Jan 30 16:08:24 crc kubenswrapper[4740]: I0130 16:08:24.455336 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:08:24 crc kubenswrapper[4740]: I0130 16:08:24.455468 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:08:25 crc kubenswrapper[4740]: I0130 16:08:25.334537 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:25 crc kubenswrapper[4740]: I0130 16:08:25.335634 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:25 crc kubenswrapper[4740]: E0130 16:08:25.366165 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(c7389b4ecfa32bc768048a608106e77068d3948c3b5b6e845a828dc5259cb525): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:25 crc kubenswrapper[4740]: E0130 16:08:25.366252 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(c7389b4ecfa32bc768048a608106e77068d3948c3b5b6e845a828dc5259cb525): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:25 crc kubenswrapper[4740]: E0130 16:08:25.366283 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(c7389b4ecfa32bc768048a608106e77068d3948c3b5b6e845a828dc5259cb525): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:25 crc kubenswrapper[4740]: E0130 16:08:25.366358 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-r2zbm_openshift-operators(522756c7-f451-4879-b2b3-2d19b80cb751)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-r2zbm_openshift-operators(522756c7-f451-4879-b2b3-2d19b80cb751)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-r2zbm_openshift-operators_522756c7-f451-4879-b2b3-2d19b80cb751_0(c7389b4ecfa32bc768048a608106e77068d3948c3b5b6e845a828dc5259cb525): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" podUID="522756c7-f451-4879-b2b3-2d19b80cb751" Jan 30 16:08:26 crc kubenswrapper[4740]: I0130 16:08:26.334657 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:26 crc kubenswrapper[4740]: I0130 16:08:26.334758 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:26 crc kubenswrapper[4740]: I0130 16:08:26.335522 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:26 crc kubenswrapper[4740]: I0130 16:08:26.335674 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:26 crc kubenswrapper[4740]: E0130 16:08:26.398892 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(4c9f05a230f6d7570a855f92113f32ca293d25aa11b9aa6e25057ac1f34d3a45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:26 crc kubenswrapper[4740]: E0130 16:08:26.398967 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(4c9f05a230f6d7570a855f92113f32ca293d25aa11b9aa6e25057ac1f34d3a45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:26 crc kubenswrapper[4740]: E0130 16:08:26.398996 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(4c9f05a230f6d7570a855f92113f32ca293d25aa11b9aa6e25057ac1f34d3a45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:26 crc kubenswrapper[4740]: E0130 16:08:26.399042 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators(27f815e6-2917-46af-8a6d-4bcd66c35042)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators(27f815e6-2917-46af-8a6d-4bcd66c35042)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_openshift-operators_27f815e6-2917-46af-8a6d-4bcd66c35042_0(4c9f05a230f6d7570a855f92113f32ca293d25aa11b9aa6e25057ac1f34d3a45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" podUID="27f815e6-2917-46af-8a6d-4bcd66c35042" Jan 30 16:08:26 crc kubenswrapper[4740]: E0130 16:08:26.404844 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(73d3f8e6f8e36ce5c009411f3ef291132c0558b6b804195c234006eed0b28053): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:26 crc kubenswrapper[4740]: E0130 16:08:26.404915 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(73d3f8e6f8e36ce5c009411f3ef291132c0558b6b804195c234006eed0b28053): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:26 crc kubenswrapper[4740]: E0130 16:08:26.404938 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(73d3f8e6f8e36ce5c009411f3ef291132c0558b6b804195c234006eed0b28053): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:26 crc kubenswrapper[4740]: E0130 16:08:26.404985 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators(7fed297b-1b60-4fa1-81ad-f7aff661624d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators(7fed297b-1b60-4fa1-81ad-f7aff661624d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-fl52v_openshift-operators_7fed297b-1b60-4fa1-81ad-f7aff661624d_0(73d3f8e6f8e36ce5c009411f3ef291132c0558b6b804195c234006eed0b28053): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" podUID="7fed297b-1b60-4fa1-81ad-f7aff661624d" Jan 30 16:08:28 crc kubenswrapper[4740]: I0130 16:08:28.334755 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:28 crc kubenswrapper[4740]: I0130 16:08:28.335517 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:28 crc kubenswrapper[4740]: E0130 16:08:28.365274 4740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(cdc526fece0b47304215374bf45a1a762883f92fc3f5cb09b46a82470d6a7a27): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:08:28 crc kubenswrapper[4740]: E0130 16:08:28.365381 4740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(cdc526fece0b47304215374bf45a1a762883f92fc3f5cb09b46a82470d6a7a27): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:28 crc kubenswrapper[4740]: E0130 16:08:28.365406 4740 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(cdc526fece0b47304215374bf45a1a762883f92fc3f5cb09b46a82470d6a7a27): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:28 crc kubenswrapper[4740]: E0130 16:08:28.365451 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators(e70968d1-7497-4724-9c80-cf5abdf288ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators(e70968d1-7497-4724-9c80-cf5abdf288ea)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_openshift-operators_e70968d1-7497-4724-9c80-cf5abdf288ea_0(cdc526fece0b47304215374bf45a1a762883f92fc3f5cb09b46a82470d6a7a27): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" podUID="e70968d1-7497-4724-9c80-cf5abdf288ea" Jan 30 16:08:33 crc kubenswrapper[4740]: I0130 16:08:33.338320 4740 scope.go:117] "RemoveContainer" containerID="4deaee5491574ddce3f8b6266f274aca00c442e1961910366d9aca5c00715c3c" Jan 30 16:08:34 crc kubenswrapper[4740]: I0130 16:08:34.485315 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pkzlw_e65088cb-e700-4af1-b788-af399f918bd0/kube-multus/2.log" Jan 30 16:08:34 crc kubenswrapper[4740]: I0130 16:08:34.485826 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pkzlw" event={"ID":"e65088cb-e700-4af1-b788-af399f918bd0","Type":"ContainerStarted","Data":"be4748d2e6a5c7825ad26a93cadf82099169746739ccd7ac2c9d0a8007dac787"} Jan 30 16:08:36 crc kubenswrapper[4740]: I0130 16:08:36.334581 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:36 crc kubenswrapper[4740]: I0130 16:08:36.334667 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:36 crc kubenswrapper[4740]: I0130 16:08:36.335844 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:36 crc kubenswrapper[4740]: I0130 16:08:36.335846 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:36 crc kubenswrapper[4740]: I0130 16:08:36.386918 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zkjpr" Jan 30 16:08:36 crc kubenswrapper[4740]: I0130 16:08:36.627007 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-r2zbm"] Jan 30 16:08:36 crc kubenswrapper[4740]: W0130 16:08:36.633253 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod522756c7_f451_4879_b2b3_2d19b80cb751.slice/crio-e8aecbf44e04c4a149539df105e5df46afc2082aa6c0787b379480abb171cf7d WatchSource:0}: Error finding container e8aecbf44e04c4a149539df105e5df46afc2082aa6c0787b379480abb171cf7d: Status 404 returned error can't find the container with id e8aecbf44e04c4a149539df105e5df46afc2082aa6c0787b379480abb171cf7d Jan 30 16:08:36 crc kubenswrapper[4740]: I0130 16:08:36.669583 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-pdgvg"] Jan 30 16:08:36 crc kubenswrapper[4740]: W0130 16:08:36.672695 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a0acde2_70b4_4622_a609_290cbc5f253f.slice/crio-835d6c7c8ec1077beb91b0ba516d160e44176482093240bd620812723e9534c5 WatchSource:0}: Error finding container 835d6c7c8ec1077beb91b0ba516d160e44176482093240bd620812723e9534c5: Status 404 returned error can't find the container with id 835d6c7c8ec1077beb91b0ba516d160e44176482093240bd620812723e9534c5 Jan 30 16:08:37 crc kubenswrapper[4740]: I0130 16:08:37.517723 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" 
event={"ID":"522756c7-f451-4879-b2b3-2d19b80cb751","Type":"ContainerStarted","Data":"e8aecbf44e04c4a149539df105e5df46afc2082aa6c0787b379480abb171cf7d"} Jan 30 16:08:37 crc kubenswrapper[4740]: I0130 16:08:37.518609 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" event={"ID":"6a0acde2-70b4-4622-a609-290cbc5f253f","Type":"ContainerStarted","Data":"835d6c7c8ec1077beb91b0ba516d160e44176482093240bd620812723e9534c5"} Jan 30 16:08:38 crc kubenswrapper[4740]: I0130 16:08:38.335223 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:38 crc kubenswrapper[4740]: I0130 16:08:38.335960 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" Jan 30 16:08:38 crc kubenswrapper[4740]: I0130 16:08:38.786078 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h"] Jan 30 16:08:38 crc kubenswrapper[4740]: W0130 16:08:38.795968 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f815e6_2917_46af_8a6d_4bcd66c35042.slice/crio-9272a668bf8b70107d0f0c826aab5cc2fcfdb0cb5187bd400d2a3f54f2369451 WatchSource:0}: Error finding container 9272a668bf8b70107d0f0c826aab5cc2fcfdb0cb5187bd400d2a3f54f2369451: Status 404 returned error can't find the container with id 9272a668bf8b70107d0f0c826aab5cc2fcfdb0cb5187bd400d2a3f54f2369451 Jan 30 16:08:39 crc kubenswrapper[4740]: I0130 16:08:39.543306 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" event={"ID":"27f815e6-2917-46af-8a6d-4bcd66c35042","Type":"ContainerStarted","Data":"9272a668bf8b70107d0f0c826aab5cc2fcfdb0cb5187bd400d2a3f54f2369451"} Jan 30 16:08:40 crc kubenswrapper[4740]: I0130 16:08:40.334564 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:40 crc kubenswrapper[4740]: I0130 16:08:40.335458 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" Jan 30 16:08:41 crc kubenswrapper[4740]: I0130 16:08:41.335607 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:41 crc kubenswrapper[4740]: I0130 16:08:41.336238 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.489934 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v"] Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.516253 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b"] Jan 30 16:08:44 crc kubenswrapper[4740]: W0130 16:08:44.522678 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode70968d1_7497_4724_9c80_cf5abdf288ea.slice/crio-4e895fc00cabd47caa69ada2b20b5885210160b8a3349b9ccf683c67322de7b2 WatchSource:0}: Error finding container 4e895fc00cabd47caa69ada2b20b5885210160b8a3349b9ccf683c67322de7b2: Status 404 returned error can't find the container with id 4e895fc00cabd47caa69ada2b20b5885210160b8a3349b9ccf683c67322de7b2 Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.614456 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" event={"ID":"6a0acde2-70b4-4622-a609-290cbc5f253f","Type":"ContainerStarted","Data":"33efc95cd0ccfe468290e30e1bd65958b556c8989ec9db898a0c7d3a95fc215e"} Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.615213 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.616948 4740 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-pdgvg container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": dial tcp 10.217.0.7:8081: connect: connection refused" start-of-body= Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.617062 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" podUID="6a0acde2-70b4-4622-a609-290cbc5f253f" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": dial tcp 10.217.0.7:8081: connect: connection refused" Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.621840 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" event={"ID":"522756c7-f451-4879-b2b3-2d19b80cb751","Type":"ContainerStarted","Data":"c99f1511c6ba2e144baed9b6c4c81ffcb03d94ebd0e6196c868f68c9dc55e465"} Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.622178 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.623793 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" event={"ID":"7fed297b-1b60-4fa1-81ad-f7aff661624d","Type":"ContainerStarted","Data":"81dc1d00705cd5306a23d2551f1999cf6f8839507aab071ca8234667bd904f6a"} Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.625170 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" event={"ID":"e70968d1-7497-4724-9c80-cf5abdf288ea","Type":"ContainerStarted","Data":"4e895fc00cabd47caa69ada2b20b5885210160b8a3349b9ccf683c67322de7b2"} Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.639602 
4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" podStartSLOduration=31.085624483 podStartE2EDuration="38.639561972s" podCreationTimestamp="2026-01-30 16:08:06 +0000 UTC" firstStartedPulling="2026-01-30 16:08:36.675925801 +0000 UTC m=+765.312988400" lastFinishedPulling="2026-01-30 16:08:44.22986329 +0000 UTC m=+772.866925889" observedRunningTime="2026-01-30 16:08:44.632641148 +0000 UTC m=+773.269703747" watchObservedRunningTime="2026-01-30 16:08:44.639561972 +0000 UTC m=+773.276624561" Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.653459 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" podStartSLOduration=33.042968057 podStartE2EDuration="38.65342869s" podCreationTimestamp="2026-01-30 16:08:06 +0000 UTC" firstStartedPulling="2026-01-30 16:08:38.79899415 +0000 UTC m=+767.436056739" lastFinishedPulling="2026-01-30 16:08:44.409454763 +0000 UTC m=+773.046517372" observedRunningTime="2026-01-30 16:08:44.652376503 +0000 UTC m=+773.289439102" watchObservedRunningTime="2026-01-30 16:08:44.65342869 +0000 UTC m=+773.290491289" Jan 30 16:08:44 crc kubenswrapper[4740]: I0130 16:08:44.677574 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" podStartSLOduration=31.081108719 podStartE2EDuration="38.677546784s" podCreationTimestamp="2026-01-30 16:08:06 +0000 UTC" firstStartedPulling="2026-01-30 16:08:36.636245216 +0000 UTC m=+765.273307815" lastFinishedPulling="2026-01-30 16:08:44.232683281 +0000 UTC m=+772.869745880" observedRunningTime="2026-01-30 16:08:44.675656007 +0000 UTC m=+773.312718606" watchObservedRunningTime="2026-01-30 16:08:44.677546784 +0000 UTC m=+773.314609393" Jan 30 16:08:45 crc kubenswrapper[4740]: I0130 16:08:45.640591 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-pd76h" event={"ID":"27f815e6-2917-46af-8a6d-4bcd66c35042","Type":"ContainerStarted","Data":"f4696d50fa1484d312773fe5814621c338c7c41d9e3085f4b62be528f147992f"} Jan 30 16:08:45 crc kubenswrapper[4740]: I0130 16:08:45.643178 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" event={"ID":"e70968d1-7497-4724-9c80-cf5abdf288ea","Type":"ContainerStarted","Data":"bd29b017d814d74273d150ba2444101c9d7e23d7988bfd310970bb9c1f30370e"} Jan 30 16:08:45 crc kubenswrapper[4740]: I0130 16:08:45.645274 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-pdgvg" Jan 30 16:08:45 crc kubenswrapper[4740]: I0130 16:08:45.673662 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b" podStartSLOduration=39.673624906 podStartE2EDuration="39.673624906s" podCreationTimestamp="2026-01-30 16:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:08:45.667703558 +0000 UTC m=+774.304766167" watchObservedRunningTime="2026-01-30 16:08:45.673624906 +0000 UTC m=+774.310687515" Jan 30 16:08:46 crc kubenswrapper[4740]: I0130 16:08:46.253615 4740 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 16:08:48 crc kubenswrapper[4740]: I0130 16:08:48.663160 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" event={"ID":"7fed297b-1b60-4fa1-81ad-f7aff661624d","Type":"ContainerStarted","Data":"628dd493f93f2678c4ba540945463b408619922c904e9206ea4abe168dcebc84"} Jan 30 16:08:48 crc kubenswrapper[4740]: I0130 16:08:48.688063 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fl52v" podStartSLOduration=40.933599545999996 podStartE2EDuration="43.688045953s" podCreationTimestamp="2026-01-30 16:08:05 +0000 UTC" firstStartedPulling="2026-01-30 16:08:44.511094861 +0000 UTC m=+773.148157460" lastFinishedPulling="2026-01-30 16:08:47.265541268 +0000 UTC m=+775.902603867" observedRunningTime="2026-01-30 16:08:48.681852428 +0000 UTC m=+777.318915027" watchObservedRunningTime="2026-01-30 16:08:48.688045953 +0000 UTC m=+777.325108552" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.328466 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7dg58"] Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.330234 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dg58" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.335471 4740 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ltmsm" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.335515 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.336048 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.344124 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-4xwlh"] Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.347402 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-4xwlh" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.368924 4740 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2chx8" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.369203 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7dg58"] Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.380476 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-4xwlh"] Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.388563 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mn7d8"] Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.389385 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mn7d8" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.392795 4740 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-flpgt" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.405444 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mn7d8"] Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.495211 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchp8\" (UniqueName: \"kubernetes.io/projected/38135331-191d-4ef6-a002-936b6b4a17b3-kube-api-access-xchp8\") pod \"cert-manager-858654f9db-4xwlh\" (UID: \"38135331-191d-4ef6-a002-936b6b4a17b3\") " pod="cert-manager/cert-manager-858654f9db-4xwlh" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.495380 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx7sv\" (UniqueName: \"kubernetes.io/projected/72764858-c1a4-408a-887a-c48ad0b4d10a-kube-api-access-qx7sv\") pod \"cert-manager-cainjector-cf98fcc89-7dg58\" (UID: \"72764858-c1a4-408a-887a-c48ad0b4d10a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dg58" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.496444 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vjf6\" (UniqueName: \"kubernetes.io/projected/3979d983-a849-4be3-a862-caed0065a705-kube-api-access-6vjf6\") pod \"cert-manager-webhook-687f57d79b-mn7d8\" (UID: \"3979d983-a849-4be3-a862-caed0065a705\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mn7d8" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.597660 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx7sv\" (UniqueName: \"kubernetes.io/projected/72764858-c1a4-408a-887a-c48ad0b4d10a-kube-api-access-qx7sv\") pod \"cert-manager-cainjector-cf98fcc89-7dg58\" (UID: \"72764858-c1a4-408a-887a-c48ad0b4d10a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dg58" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.597735 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vjf6\" (UniqueName: \"kubernetes.io/projected/3979d983-a849-4be3-a862-caed0065a705-kube-api-access-6vjf6\") pod \"cert-manager-webhook-687f57d79b-mn7d8\" (UID: \"3979d983-a849-4be3-a862-caed0065a705\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mn7d8" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.597778 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchp8\" (UniqueName: \"kubernetes.io/projected/38135331-191d-4ef6-a002-936b6b4a17b3-kube-api-access-xchp8\") pod \"cert-manager-858654f9db-4xwlh\" (UID: \"38135331-191d-4ef6-a002-936b6b4a17b3\") " pod="cert-manager/cert-manager-858654f9db-4xwlh" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.620823 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vjf6\" (UniqueName: \"kubernetes.io/projected/3979d983-a849-4be3-a862-caed0065a705-kube-api-access-6vjf6\") pod \"cert-manager-webhook-687f57d79b-mn7d8\" (UID: \"3979d983-a849-4be3-a862-caed0065a705\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mn7d8" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.620893 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xchp8\" (UniqueName: \"kubernetes.io/projected/38135331-191d-4ef6-a002-936b6b4a17b3-kube-api-access-xchp8\") pod \"cert-manager-858654f9db-4xwlh\" (UID: \"38135331-191d-4ef6-a002-936b6b4a17b3\") " pod="cert-manager/cert-manager-858654f9db-4xwlh" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.626379 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx7sv\" (UniqueName: \"kubernetes.io/projected/72764858-c1a4-408a-887a-c48ad0b4d10a-kube-api-access-qx7sv\") pod \"cert-manager-cainjector-cf98fcc89-7dg58\" (UID: \"72764858-c1a4-408a-887a-c48ad0b4d10a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dg58" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.651264 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dg58" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.670393 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-4xwlh" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.707064 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mn7d8" Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.900330 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7dg58"] Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.959579 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-4xwlh"] Jan 30 16:08:52 crc kubenswrapper[4740]: I0130 16:08:52.993243 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mn7d8"] Jan 30 16:08:52 crc kubenswrapper[4740]: W0130 16:08:52.996955 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3979d983_a849_4be3_a862_caed0065a705.slice/crio-ad0d8f2a59aa0cca44451a0c432049d93126f3875ccb3f3a6fd3dedf21b991f5 WatchSource:0}: Error finding container ad0d8f2a59aa0cca44451a0c432049d93126f3875ccb3f3a6fd3dedf21b991f5: Status 404 returned error can't find the container with id ad0d8f2a59aa0cca44451a0c432049d93126f3875ccb3f3a6fd3dedf21b991f5 Jan 30 16:08:53 crc kubenswrapper[4740]: I0130 16:08:53.699113 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-4xwlh" event={"ID":"38135331-191d-4ef6-a002-936b6b4a17b3","Type":"ContainerStarted","Data":"b340784b46e6c35f13553172e19cb3b7ab4f4265f01af8c1456ad99a1a024cad"} Jan 30 16:08:53 crc kubenswrapper[4740]: I0130 16:08:53.702026 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dg58" event={"ID":"72764858-c1a4-408a-887a-c48ad0b4d10a","Type":"ContainerStarted","Data":"13b0bdab2808141d96fc4f1a83ca1f732df56a9b82e5180ec2b910524dc33ee7"} Jan 30 16:08:53 crc kubenswrapper[4740]: I0130 16:08:53.703706 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mn7d8" event={"ID":"3979d983-a849-4be3-a862-caed0065a705","Type":"ContainerStarted","Data":"ad0d8f2a59aa0cca44451a0c432049d93126f3875ccb3f3a6fd3dedf21b991f5"} Jan 30 16:08:54 crc kubenswrapper[4740]: I0130 16:08:54.455770 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:08:54 crc kubenswrapper[4740]: I0130 16:08:54.456272 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:08:56 crc kubenswrapper[4740]: I0130 16:08:56.808431 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-r2zbm" Jan 30 16:08:57 crc kubenswrapper[4740]: I0130 16:08:57.738019 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-4xwlh" event={"ID":"38135331-191d-4ef6-a002-936b6b4a17b3","Type":"ContainerStarted","Data":"05af0b3e88d9852c1ee19ca76287c8ac8f9c399527270df202b439df955b9454"} Jan 30 16:08:57 crc kubenswrapper[4740]: I0130 16:08:57.740183 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dg58" event={"ID":"72764858-c1a4-408a-887a-c48ad0b4d10a","Type":"ContainerStarted","Data":"9d81a3688e3cfcbe6e55be7c7e827ea5874d127da7a46cd46712cc2b886a717a"} Jan 30 16:08:57 crc kubenswrapper[4740]: I0130 16:08:57.742509 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mn7d8" event={"ID":"3979d983-a849-4be3-a862-caed0065a705","Type":"ContainerStarted","Data":"e3c2f83d6a74c82518367e388afb80f3134232b70880e6cdae650170dc4aa6d2"} Jan 30 16:08:57 crc kubenswrapper[4740]: I0130 16:08:57.743045 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mn7d8" Jan 30 16:08:57 crc kubenswrapper[4740]: I0130 16:08:57.795966 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7dg58" podStartSLOduration=1.762421174 podStartE2EDuration="5.79594014s" podCreationTimestamp="2026-01-30 16:08:52 +0000 UTC" firstStartedPulling="2026-01-30 16:08:52.918076205 +0000 UTC m=+781.555138794" lastFinishedPulling="2026-01-30 16:08:56.951595151 +0000 UTC m=+785.588657760" observedRunningTime="2026-01-30 16:08:57.790162425 +0000 UTC m=+786.427225024" watchObservedRunningTime="2026-01-30 16:08:57.79594014 +0000 UTC m=+786.433002729" Jan 30 16:08:57 crc kubenswrapper[4740]: I0130 16:08:57.797275 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-4xwlh" podStartSLOduration=1.807841193 podStartE2EDuration="5.797263323s" podCreationTimestamp="2026-01-30 16:08:52 +0000 UTC" firstStartedPulling="2026-01-30 16:08:52.962777196 +0000 UTC m=+781.599839785" lastFinishedPulling="2026-01-30 16:08:56.952199316 +0000 UTC m=+785.589261915" observedRunningTime="2026-01-30 16:08:57.764469431 +0000 UTC m=+786.401532030" watchObservedRunningTime="2026-01-30 16:08:57.797263323 +0000 UTC m=+786.434325932" Jan 30 16:08:57 crc kubenswrapper[4740]: I0130 16:08:57.814045 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-mn7d8" podStartSLOduration=1.780973199 podStartE2EDuration="5.814017503s" podCreationTimestamp="2026-01-30 16:08:52 +0000 UTC" firstStartedPulling="2026-01-30 16:08:52.999803354 +0000 UTC m=+781.636865943" 
lastFinishedPulling="2026-01-30 16:08:57.032847648 +0000 UTC m=+785.669910247" observedRunningTime="2026-01-30 16:08:57.810612978 +0000 UTC m=+786.447675587" watchObservedRunningTime="2026-01-30 16:08:57.814017503 +0000 UTC m=+786.451080102" Jan 30 16:09:02 crc kubenswrapper[4740]: I0130 16:09:02.710812 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-mn7d8" Jan 30 16:09:24 crc kubenswrapper[4740]: I0130 16:09:24.455193 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:09:24 crc kubenswrapper[4740]: I0130 16:09:24.456715 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:09:24 crc kubenswrapper[4740]: I0130 16:09:24.456903 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 16:09:24 crc kubenswrapper[4740]: I0130 16:09:24.458464 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d64453654b97af2a24f5bc387099a48fbcbd73b8814c94ebe9bbc445d2531865"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:09:24 crc kubenswrapper[4740]: I0130 16:09:24.458613 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://d64453654b97af2a24f5bc387099a48fbcbd73b8814c94ebe9bbc445d2531865" gracePeriod=600 Jan 30 16:09:24 crc kubenswrapper[4740]: E0130 16:09:24.609505 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod139658c1_36a2_4af9_bdfd_2bc3f9e6dcc9.slice/crio-d64453654b97af2a24f5bc387099a48fbcbd73b8814c94ebe9bbc445d2531865.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.347968 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f"] Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.352082 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.356547 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.360200 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f"] Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.449508 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.449829 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.449911 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpsk4\" (UniqueName: \"kubernetes.io/projected/1dd39293-3572-4079-8f91-9f6549e8304d-kube-api-access-qpsk4\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.552013 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.552068 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpsk4\" (UniqueName: \"kubernetes.io/projected/1dd39293-3572-4079-8f91-9f6549e8304d-kube-api-access-qpsk4\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.552127 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.552644 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-util\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.552991 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-bundle\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.586795 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpsk4\" (UniqueName: \"kubernetes.io/projected/1dd39293-3572-4079-8f91-9f6549e8304d-kube-api-access-qpsk4\") pod \"3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.706100 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.962303 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="d64453654b97af2a24f5bc387099a48fbcbd73b8814c94ebe9bbc445d2531865" exitCode=0 Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.962733 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"d64453654b97af2a24f5bc387099a48fbcbd73b8814c94ebe9bbc445d2531865"} Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.962770 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"54a3dc50e2178ac6be5a1090a31fc5146169210c340898f9c81cac9ad152568a"} Jan 30 16:09:26 crc kubenswrapper[4740]: I0130 16:09:26.962794 4740 scope.go:117] "RemoveContainer" containerID="7e1f561b758eb69b53697bf5389d7cefba5c1bd9781fba3c05bad5a8566f9531" Jan 30 16:09:27 crc kubenswrapper[4740]: I0130 16:09:27.257580 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f"] Jan 30 16:09:27 crc kubenswrapper[4740]: I0130 16:09:27.973079 4740 generic.go:334] "Generic (PLEG): container finished" podID="1dd39293-3572-4079-8f91-9f6549e8304d" containerID="c74f0944430c307bced80d94b3808722f1f1d3f5d818690e17e548ba28342c51" exitCode=0 Jan 30 16:09:27 crc kubenswrapper[4740]: I0130 16:09:27.973178 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" event={"ID":"1dd39293-3572-4079-8f91-9f6549e8304d","Type":"ContainerDied","Data":"c74f0944430c307bced80d94b3808722f1f1d3f5d818690e17e548ba28342c51"} Jan 30 16:09:27 crc kubenswrapper[4740]: I0130 16:09:27.973235 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" event={"ID":"1dd39293-3572-4079-8f91-9f6549e8304d","Type":"ContainerStarted","Data":"5a589cec16e98621d784d59a471a7f82c86a113a6e3e328d0c223b1332aa7e7a"} Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.261437 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.262830 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.266410 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.266409 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.266624 4740 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-8wknj" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.323214 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.382729 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c58cf70c-6795-4905-a8b6-15fc8c0448b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c58cf70c-6795-4905-a8b6-15fc8c0448b2\") pod \"minio\" (UID: \"a19b5135-c4e2-4483-b6d4-cdb38716c1b6\") " pod="minio-dev/minio" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.382808 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf82j\" (UniqueName: \"kubernetes.io/projected/a19b5135-c4e2-4483-b6d4-cdb38716c1b6-kube-api-access-wf82j\") pod \"minio\" (UID: \"a19b5135-c4e2-4483-b6d4-cdb38716c1b6\") " pod="minio-dev/minio" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.484122 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c58cf70c-6795-4905-a8b6-15fc8c0448b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c58cf70c-6795-4905-a8b6-15fc8c0448b2\") pod \"minio\" (UID: \"a19b5135-c4e2-4483-b6d4-cdb38716c1b6\") " pod="minio-dev/minio" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.484196 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf82j\" (UniqueName: \"kubernetes.io/projected/a19b5135-c4e2-4483-b6d4-cdb38716c1b6-kube-api-access-wf82j\") pod \"minio\" (UID: \"a19b5135-c4e2-4483-b6d4-cdb38716c1b6\") " pod="minio-dev/minio" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.489146 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.489184 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c58cf70c-6795-4905-a8b6-15fc8c0448b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c58cf70c-6795-4905-a8b6-15fc8c0448b2\") pod \"minio\" (UID: \"a19b5135-c4e2-4483-b6d4-cdb38716c1b6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/464091723f5225bd64eb06785246438db5d60eb80f0fde99a7ad7e3970fd3365/globalmount\"" pod="minio-dev/minio" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.509493 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf82j\" (UniqueName: \"kubernetes.io/projected/a19b5135-c4e2-4483-b6d4-cdb38716c1b6-kube-api-access-wf82j\") pod \"minio\" (UID: \"a19b5135-c4e2-4483-b6d4-cdb38716c1b6\") " pod="minio-dev/minio" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.517913 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c58cf70c-6795-4905-a8b6-15fc8c0448b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c58cf70c-6795-4905-a8b6-15fc8c0448b2\") pod \"minio\" (UID: \"a19b5135-c4e2-4483-b6d4-cdb38716c1b6\") " pod="minio-dev/minio" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.629214 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.696380 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lm45s"] Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.697597 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.713149 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lm45s"] Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.788468 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-utilities\") pod \"redhat-operators-lm45s\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.788569 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvjk\" (UniqueName: \"kubernetes.io/projected/9b842006-df66-4855-9569-7bc0f493a6c9-kube-api-access-jxvjk\") pod \"redhat-operators-lm45s\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.788619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-catalog-content\") pod \"redhat-operators-lm45s\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.891215 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxvjk\" (UniqueName: \"kubernetes.io/projected/9b842006-df66-4855-9569-7bc0f493a6c9-kube-api-access-jxvjk\") pod \"redhat-operators-lm45s\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " 
pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.891291 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-catalog-content\") pod \"redhat-operators-lm45s\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.891365 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-utilities\") pod \"redhat-operators-lm45s\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.891931 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-utilities\") pod \"redhat-operators-lm45s\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.892375 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-catalog-content\") pod \"redhat-operators-lm45s\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:28 crc kubenswrapper[4740]: I0130 16:09:28.921277 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxvjk\" (UniqueName: \"kubernetes.io/projected/9b842006-df66-4855-9569-7bc0f493a6c9-kube-api-access-jxvjk\") pod \"redhat-operators-lm45s\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:29 crc kubenswrapper[4740]: I0130 16:09:29.037323 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:29 crc kubenswrapper[4740]: I0130 16:09:29.245401 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 30 16:09:29 crc kubenswrapper[4740]: I0130 16:09:29.290757 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lm45s"] Jan 30 16:09:29 crc kubenswrapper[4740]: W0130 16:09:29.298323 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda19b5135_c4e2_4483_b6d4_cdb38716c1b6.slice/crio-db2632b655d9a48886b507ac97b79241f00692353c8c9d1bb16f8c6307cd41b8 WatchSource:0}: Error finding container db2632b655d9a48886b507ac97b79241f00692353c8c9d1bb16f8c6307cd41b8: Status 404 returned error can't find the container with id db2632b655d9a48886b507ac97b79241f00692353c8c9d1bb16f8c6307cd41b8 Jan 30 16:09:29 crc kubenswrapper[4740]: W0130 16:09:29.299093 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b842006_df66_4855_9569_7bc0f493a6c9.slice/crio-2f992102e44d585c37d3690b46cb4916dd8ade97d05a990d1a56075b55ddc45a WatchSource:0}: Error finding container 2f992102e44d585c37d3690b46cb4916dd8ade97d05a990d1a56075b55ddc45a: Status 404 returned error can't find the container with id 2f992102e44d585c37d3690b46cb4916dd8ade97d05a990d1a56075b55ddc45a Jan 30 16:09:29 crc kubenswrapper[4740]: I0130 16:09:29.996290 4740 generic.go:334] "Generic (PLEG): container finished" podID="1dd39293-3572-4079-8f91-9f6549e8304d" containerID="c372a3cf61d2148d0b9f7bd3bf74e015419fcac32a4413522cb7dd697711d3bf" exitCode=0 Jan 30 16:09:29 crc kubenswrapper[4740]: I0130 16:09:29.996907 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" event={"ID":"1dd39293-3572-4079-8f91-9f6549e8304d","Type":"ContainerDied","Data":"c372a3cf61d2148d0b9f7bd3bf74e015419fcac32a4413522cb7dd697711d3bf"} Jan 30 16:09:30 crc kubenswrapper[4740]: I0130 16:09:30.007098 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"a19b5135-c4e2-4483-b6d4-cdb38716c1b6","Type":"ContainerStarted","Data":"db2632b655d9a48886b507ac97b79241f00692353c8c9d1bb16f8c6307cd41b8"} Jan 30 16:09:30 crc kubenswrapper[4740]: I0130 16:09:30.013531 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b842006-df66-4855-9569-7bc0f493a6c9" containerID="a72dd55161adda8cf04372815594ffc7fb570e09f7cbfb2f22236657f2b87eab" exitCode=0 Jan 30 16:09:30 crc kubenswrapper[4740]: I0130 16:09:30.013605 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm45s" event={"ID":"9b842006-df66-4855-9569-7bc0f493a6c9","Type":"ContainerDied","Data":"a72dd55161adda8cf04372815594ffc7fb570e09f7cbfb2f22236657f2b87eab"} Jan 30 16:09:30 crc kubenswrapper[4740]: I0130 16:09:30.013646 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm45s" event={"ID":"9b842006-df66-4855-9569-7bc0f493a6c9","Type":"ContainerStarted","Data":"2f992102e44d585c37d3690b46cb4916dd8ade97d05a990d1a56075b55ddc45a"} Jan 30 16:09:31 crc kubenswrapper[4740]: I0130 16:09:31.025100 4740 generic.go:334] "Generic (PLEG): container finished" podID="1dd39293-3572-4079-8f91-9f6549e8304d" containerID="6acd935912c54c6ce9a51c594a2ce17836a8af7268872474610a188a178ef05a" exitCode=0 Jan 30 16:09:31 crc 
kubenswrapper[4740]: I0130 16:09:31.025231 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" event={"ID":"1dd39293-3572-4079-8f91-9f6549e8304d","Type":"ContainerDied","Data":"6acd935912c54c6ce9a51c594a2ce17836a8af7268872474610a188a178ef05a"} Jan 30 16:09:32 crc kubenswrapper[4740]: I0130 16:09:32.465258 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:32 crc kubenswrapper[4740]: I0130 16:09:32.555979 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpsk4\" (UniqueName: \"kubernetes.io/projected/1dd39293-3572-4079-8f91-9f6549e8304d-kube-api-access-qpsk4\") pod \"1dd39293-3572-4079-8f91-9f6549e8304d\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " Jan 30 16:09:32 crc kubenswrapper[4740]: I0130 16:09:32.556139 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-bundle\") pod \"1dd39293-3572-4079-8f91-9f6549e8304d\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " Jan 30 16:09:32 crc kubenswrapper[4740]: I0130 16:09:32.556191 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-util\") pod \"1dd39293-3572-4079-8f91-9f6549e8304d\" (UID: \"1dd39293-3572-4079-8f91-9f6549e8304d\") " Jan 30 16:09:32 crc kubenswrapper[4740]: I0130 16:09:32.558872 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-bundle" (OuterVolumeSpecName: "bundle") pod "1dd39293-3572-4079-8f91-9f6549e8304d" (UID: "1dd39293-3572-4079-8f91-9f6549e8304d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:09:32 crc kubenswrapper[4740]: I0130 16:09:32.564556 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd39293-3572-4079-8f91-9f6549e8304d-kube-api-access-qpsk4" (OuterVolumeSpecName: "kube-api-access-qpsk4") pod "1dd39293-3572-4079-8f91-9f6549e8304d" (UID: "1dd39293-3572-4079-8f91-9f6549e8304d"). InnerVolumeSpecName "kube-api-access-qpsk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:09:32 crc kubenswrapper[4740]: I0130 16:09:32.575865 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-util" (OuterVolumeSpecName: "util") pod "1dd39293-3572-4079-8f91-9f6549e8304d" (UID: "1dd39293-3572-4079-8f91-9f6549e8304d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:09:32 crc kubenswrapper[4740]: I0130 16:09:32.658051 4740 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:09:32 crc kubenswrapper[4740]: I0130 16:09:32.658087 4740 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1dd39293-3572-4079-8f91-9f6549e8304d-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:09:32 crc kubenswrapper[4740]: I0130 16:09:32.658097 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpsk4\" (UniqueName: \"kubernetes.io/projected/1dd39293-3572-4079-8f91-9f6549e8304d-kube-api-access-qpsk4\") on node \"crc\" DevicePath \"\"" Jan 30 16:09:33 crc kubenswrapper[4740]: I0130 16:09:33.053905 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" event={"ID":"1dd39293-3572-4079-8f91-9f6549e8304d","Type":"ContainerDied","Data":"5a589cec16e98621d784d59a471a7f82c86a113a6e3e328d0c223b1332aa7e7a"} Jan 30 16:09:33 crc kubenswrapper[4740]: I0130 16:09:33.054334 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a589cec16e98621d784d59a471a7f82c86a113a6e3e328d0c223b1332aa7e7a" Jan 30 16:09:33 crc kubenswrapper[4740]: I0130 16:09:33.054086 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f" Jan 30 16:09:34 crc kubenswrapper[4740]: I0130 16:09:34.063380 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"a19b5135-c4e2-4483-b6d4-cdb38716c1b6","Type":"ContainerStarted","Data":"73414e51fc7b4961a3d9704893d28708c379b45fb42ccec25d706911c67100ea"} Jan 30 16:09:34 crc kubenswrapper[4740]: I0130 16:09:34.065806 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm45s" event={"ID":"9b842006-df66-4855-9569-7bc0f493a6c9","Type":"ContainerStarted","Data":"e4d0024d1ae0ccd70e4400ba74b8c656c8db8c7aee78896a66587efe7a9843c9"} Jan 30 16:09:34 crc kubenswrapper[4740]: I0130 16:09:34.117679 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.15691753 podStartE2EDuration="9.117657261s" podCreationTimestamp="2026-01-30 16:09:25 +0000 UTC" firstStartedPulling="2026-01-30 16:09:29.301234817 +0000 UTC m=+817.938297416" lastFinishedPulling="2026-01-30 16:09:33.261974548 +0000 UTC m=+821.899037147" observedRunningTime="2026-01-30 16:09:34.095812954 +0000 UTC m=+822.732875543" watchObservedRunningTime="2026-01-30 16:09:34.117657261 +0000 UTC m=+822.754719850" Jan 30 16:09:35 crc kubenswrapper[4740]: I0130 16:09:35.074623 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b842006-df66-4855-9569-7bc0f493a6c9" containerID="e4d0024d1ae0ccd70e4400ba74b8c656c8db8c7aee78896a66587efe7a9843c9" exitCode=0 Jan 30 16:09:35 crc kubenswrapper[4740]: I0130 16:09:35.074689 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm45s" event={"ID":"9b842006-df66-4855-9569-7bc0f493a6c9","Type":"ContainerDied","Data":"e4d0024d1ae0ccd70e4400ba74b8c656c8db8c7aee78896a66587efe7a9843c9"} Jan 30 16:09:36 crc kubenswrapper[4740]: I0130 16:09:36.085197 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-lm45s" event={"ID":"9b842006-df66-4855-9569-7bc0f493a6c9","Type":"ContainerStarted","Data":"dc2858abc04f75e85b5816390fb6ab201de63626b7d5267f80e0ee7108840765"} Jan 30 16:09:36 crc kubenswrapper[4740]: I0130 16:09:36.118791 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lm45s" podStartSLOduration=2.657827218 podStartE2EDuration="8.118766631s" podCreationTimestamp="2026-01-30 16:09:28 +0000 UTC" firstStartedPulling="2026-01-30 16:09:30.048940513 +0000 UTC m=+818.686003112" lastFinishedPulling="2026-01-30 16:09:35.509879926 +0000 UTC m=+824.146942525" observedRunningTime="2026-01-30 16:09:36.114532035 +0000 UTC m=+824.751594654" watchObservedRunningTime="2026-01-30 16:09:36.118766631 +0000 UTC m=+824.755829250" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.616625 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n"] Jan 30 16:09:38 crc kubenswrapper[4740]: E0130 16:09:38.617273 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd39293-3572-4079-8f91-9f6549e8304d" containerName="util" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.617288 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd39293-3572-4079-8f91-9f6549e8304d" containerName="util" Jan 30 16:09:38 crc kubenswrapper[4740]: E0130 16:09:38.617303 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd39293-3572-4079-8f91-9f6549e8304d" containerName="pull" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.617312 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd39293-3572-4079-8f91-9f6549e8304d" containerName="pull" Jan 30 16:09:38 crc kubenswrapper[4740]: E0130 16:09:38.617331 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd39293-3572-4079-8f91-9f6549e8304d" containerName="extract" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.617339 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd39293-3572-4079-8f91-9f6549e8304d" containerName="extract" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.617470 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd39293-3572-4079-8f91-9f6549e8304d" containerName="extract" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.618137 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.620744 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.620940 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.620995 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.621736 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.621753 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.622557 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-dnvqf" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.637608 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n"] Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.640714 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05857b7d-f148-447a-96bb-d9846ef7402c-apiservice-cert\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.640816 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pgkp\" (UniqueName: \"kubernetes.io/projected/05857b7d-f148-447a-96bb-d9846ef7402c-kube-api-access-2pgkp\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.640897 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/05857b7d-f148-447a-96bb-d9846ef7402c-manager-config\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.640973 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05857b7d-f148-447a-96bb-d9846ef7402c-webhook-cert\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.641040 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/05857b7d-f148-447a-96bb-d9846ef7402c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.742015 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/05857b7d-f148-447a-96bb-d9846ef7402c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.742074 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05857b7d-f148-447a-96bb-d9846ef7402c-apiservice-cert\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.742128 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pgkp\" (UniqueName: \"kubernetes.io/projected/05857b7d-f148-447a-96bb-d9846ef7402c-kube-api-access-2pgkp\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.742168 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/05857b7d-f148-447a-96bb-d9846ef7402c-manager-config\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.742189 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05857b7d-f148-447a-96bb-d9846ef7402c-webhook-cert\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.743925 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/05857b7d-f148-447a-96bb-d9846ef7402c-manager-config\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.752236 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/05857b7d-f148-447a-96bb-d9846ef7402c-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.752977 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05857b7d-f148-447a-96bb-d9846ef7402c-webhook-cert\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.784937 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05857b7d-f148-447a-96bb-d9846ef7402c-apiservice-cert\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.790225 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pgkp\" (UniqueName: \"kubernetes.io/projected/05857b7d-f148-447a-96bb-d9846ef7402c-kube-api-access-2pgkp\") pod \"loki-operator-controller-manager-b8b44847-7889n\" (UID: \"05857b7d-f148-447a-96bb-d9846ef7402c\") " pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:38 crc kubenswrapper[4740]: I0130 16:09:38.939649 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:39 crc kubenswrapper[4740]: I0130 16:09:39.038278 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:39 crc kubenswrapper[4740]: I0130 16:09:39.038664 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:39 crc kubenswrapper[4740]: I0130 16:09:39.363810 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n"] Jan 30 16:09:40 crc kubenswrapper[4740]: I0130 16:09:40.113202 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" event={"ID":"05857b7d-f148-447a-96bb-d9846ef7402c","Type":"ContainerStarted","Data":"f07580eb85ff8a4a8b575e27098ab1bd56b71d110e0866cf130e56de59502bed"} Jan 30 16:09:40 crc kubenswrapper[4740]: I0130 16:09:40.139098 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lm45s" podUID="9b842006-df66-4855-9569-7bc0f493a6c9" containerName="registry-server" probeResult="failure" output=< Jan 30 16:09:40 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:09:40 crc kubenswrapper[4740]: > Jan 30 16:09:47 crc kubenswrapper[4740]: I0130 16:09:47.169548 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" event={"ID":"05857b7d-f148-447a-96bb-d9846ef7402c","Type":"ContainerStarted","Data":"f8161af789174b7fe38ca2a2422335153560696e6ce1309ba2602fed2f33488b"} Jan 30 16:09:49 crc kubenswrapper[4740]: I0130 16:09:49.093556 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:49 crc kubenswrapper[4740]: I0130 16:09:49.147221 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:51 crc kubenswrapper[4740]: I0130 16:09:51.485758 4740 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lm45s"] Jan 30 16:09:51 crc kubenswrapper[4740]: I0130 16:09:51.486548 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lm45s" podUID="9b842006-df66-4855-9569-7bc0f493a6c9" containerName="registry-server" containerID="cri-o://dc2858abc04f75e85b5816390fb6ab201de63626b7d5267f80e0ee7108840765" gracePeriod=2 Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.219266 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b842006-df66-4855-9569-7bc0f493a6c9" containerID="dc2858abc04f75e85b5816390fb6ab201de63626b7d5267f80e0ee7108840765" exitCode=0 Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.219342 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm45s" event={"ID":"9b842006-df66-4855-9569-7bc0f493a6c9","Type":"ContainerDied","Data":"dc2858abc04f75e85b5816390fb6ab201de63626b7d5267f80e0ee7108840765"} Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.584813 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.717822 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-utilities\") pod \"9b842006-df66-4855-9569-7bc0f493a6c9\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.718288 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxvjk\" (UniqueName: \"kubernetes.io/projected/9b842006-df66-4855-9569-7bc0f493a6c9-kube-api-access-jxvjk\") pod \"9b842006-df66-4855-9569-7bc0f493a6c9\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.718409 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-catalog-content\") pod \"9b842006-df66-4855-9569-7bc0f493a6c9\" (UID: \"9b842006-df66-4855-9569-7bc0f493a6c9\") " Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.719128 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-utilities" (OuterVolumeSpecName: "utilities") pod "9b842006-df66-4855-9569-7bc0f493a6c9" (UID: "9b842006-df66-4855-9569-7bc0f493a6c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.727329 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b842006-df66-4855-9569-7bc0f493a6c9-kube-api-access-jxvjk" (OuterVolumeSpecName: "kube-api-access-jxvjk") pod "9b842006-df66-4855-9569-7bc0f493a6c9" (UID: "9b842006-df66-4855-9569-7bc0f493a6c9"). InnerVolumeSpecName "kube-api-access-jxvjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.820960 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.821030 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxvjk\" (UniqueName: \"kubernetes.io/projected/9b842006-df66-4855-9569-7bc0f493a6c9-kube-api-access-jxvjk\") on node \"crc\" DevicePath \"\"" Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.834984 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b842006-df66-4855-9569-7bc0f493a6c9" (UID: "9b842006-df66-4855-9569-7bc0f493a6c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:09:52 crc kubenswrapper[4740]: I0130 16:09:52.922333 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b842006-df66-4855-9569-7bc0f493a6c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.228932 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" event={"ID":"05857b7d-f148-447a-96bb-d9846ef7402c","Type":"ContainerStarted","Data":"4a006a0f33050259de58f18abb56b8852a5c5213eb5f2862892a6ce1a677bbc1"} Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.229460 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.232377 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.232419 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lm45s" event={"ID":"9b842006-df66-4855-9569-7bc0f493a6c9","Type":"ContainerDied","Data":"2f992102e44d585c37d3690b46cb4916dd8ade97d05a990d1a56075b55ddc45a"} Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.232461 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lm45s" Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.232459 4740 scope.go:117] "RemoveContainer" containerID="dc2858abc04f75e85b5816390fb6ab201de63626b7d5267f80e0ee7108840765" Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.252089 4740 scope.go:117] "RemoveContainer" containerID="e4d0024d1ae0ccd70e4400ba74b8c656c8db8c7aee78896a66587efe7a9843c9" Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.261455 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-b8b44847-7889n" podStartSLOduration=2.034271918 podStartE2EDuration="15.261414291s" podCreationTimestamp="2026-01-30 16:09:38 +0000 UTC" firstStartedPulling="2026-01-30 16:09:39.39297794 +0000 UTC m=+828.030040539" lastFinishedPulling="2026-01-30 16:09:52.620120323 +0000 UTC m=+841.257182912" observedRunningTime="2026-01-30 16:09:53.255448582 +0000 UTC m=+841.892511211" watchObservedRunningTime="2026-01-30 16:09:53.261414291 +0000 UTC m=+841.898476930" Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.301234 4740 scope.go:117] "RemoveContainer" containerID="a72dd55161adda8cf04372815594ffc7fb570e09f7cbfb2f22236657f2b87eab" Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.320883 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lm45s"] Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.327487 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lm45s"] Jan 30 16:09:53 crc kubenswrapper[4740]: I0130 16:09:53.348401 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b842006-df66-4855-9569-7bc0f493a6c9" path="/var/lib/kubelet/pods/9b842006-df66-4855-9569-7bc0f493a6c9/volumes" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.325106 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2"] Jan 30 16:10:35 crc kubenswrapper[4740]: E0130 16:10:35.326006 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b842006-df66-4855-9569-7bc0f493a6c9" containerName="registry-server" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.326026 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b842006-df66-4855-9569-7bc0f493a6c9" containerName="registry-server" Jan 30 16:10:35 crc kubenswrapper[4740]: E0130 16:10:35.326039 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b842006-df66-4855-9569-7bc0f493a6c9" containerName="extract-content" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.326048 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b842006-df66-4855-9569-7bc0f493a6c9" containerName="extract-content" Jan 30 16:10:35 crc kubenswrapper[4740]: E0130 16:10:35.326059 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b842006-df66-4855-9569-7bc0f493a6c9" containerName="extract-utilities" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.326068 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b842006-df66-4855-9569-7bc0f493a6c9" containerName="extract-utilities" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.326225 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b842006-df66-4855-9569-7bc0f493a6c9" containerName="registry-server" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.327297 4740 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.329911 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.344375 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2"] Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.432693 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.432852 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.432892 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r28pm\" (UniqueName: \"kubernetes.io/projected/d14500ed-3452-479b-b86a-d000ba46cdc5-kube-api-access-r28pm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.534308 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.534404 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r28pm\" (UniqueName: \"kubernetes.io/projected/d14500ed-3452-479b-b86a-d000ba46cdc5-kube-api-access-r28pm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.534458 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.534948 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.535046 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.563872 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r28pm\" (UniqueName: \"kubernetes.io/projected/d14500ed-3452-479b-b86a-d000ba46cdc5-kube-api-access-r28pm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:35 crc kubenswrapper[4740]: I0130 16:10:35.655995 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:36 crc kubenswrapper[4740]: I0130 16:10:36.118534 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2"] Jan 30 16:10:36 crc kubenswrapper[4740]: I0130 16:10:36.564895 4740 generic.go:334] "Generic (PLEG): container finished" podID="d14500ed-3452-479b-b86a-d000ba46cdc5" containerID="04dcab407e626acead4e6ce88dce705b9aa20c2a3751fa720f4b7051db3ce91c" exitCode=0 Jan 30 16:10:36 crc kubenswrapper[4740]: I0130 16:10:36.564978 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" event={"ID":"d14500ed-3452-479b-b86a-d000ba46cdc5","Type":"ContainerDied","Data":"04dcab407e626acead4e6ce88dce705b9aa20c2a3751fa720f4b7051db3ce91c"} Jan 30 16:10:36 crc kubenswrapper[4740]: I0130 16:10:36.565425 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" event={"ID":"d14500ed-3452-479b-b86a-d000ba46cdc5","Type":"ContainerStarted","Data":"100e0c324b14cace3e1224ed98bb01a63caa189a932c85059ee7046aacd13407"} Jan 30 16:10:38 crc kubenswrapper[4740]: I0130 16:10:38.588283 4740 generic.go:334] "Generic (PLEG): container finished" podID="d14500ed-3452-479b-b86a-d000ba46cdc5" containerID="f62797339f8203bede72c1cf7b46794eb22010559afaff37127b6cbf7c76245b" exitCode=0 Jan 30 16:10:38 crc kubenswrapper[4740]: I0130 16:10:38.588467 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" event={"ID":"d14500ed-3452-479b-b86a-d000ba46cdc5","Type":"ContainerDied","Data":"f62797339f8203bede72c1cf7b46794eb22010559afaff37127b6cbf7c76245b"} Jan 30 16:10:39 crc kubenswrapper[4740]: I0130 16:10:39.600074 4740 generic.go:334] "Generic (PLEG): container finished" podID="d14500ed-3452-479b-b86a-d000ba46cdc5" containerID="fa8340115ad3a310a513728029a1d064ffc5cafbec649457a781f454ab322831" exitCode=0 Jan 30 16:10:39 crc kubenswrapper[4740]: I0130 
16:10:39.600162 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" event={"ID":"d14500ed-3452-479b-b86a-d000ba46cdc5","Type":"ContainerDied","Data":"fa8340115ad3a310a513728029a1d064ffc5cafbec649457a781f454ab322831"} Jan 30 16:10:40 crc kubenswrapper[4740]: I0130 16:10:40.857284 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.051537 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-bundle\") pod \"d14500ed-3452-479b-b86a-d000ba46cdc5\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.051633 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r28pm\" (UniqueName: \"kubernetes.io/projected/d14500ed-3452-479b-b86a-d000ba46cdc5-kube-api-access-r28pm\") pod \"d14500ed-3452-479b-b86a-d000ba46cdc5\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.051713 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-util\") pod \"d14500ed-3452-479b-b86a-d000ba46cdc5\" (UID: \"d14500ed-3452-479b-b86a-d000ba46cdc5\") " Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.052934 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-bundle" (OuterVolumeSpecName: "bundle") pod "d14500ed-3452-479b-b86a-d000ba46cdc5" (UID: "d14500ed-3452-479b-b86a-d000ba46cdc5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.058747 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d14500ed-3452-479b-b86a-d000ba46cdc5-kube-api-access-r28pm" (OuterVolumeSpecName: "kube-api-access-r28pm") pod "d14500ed-3452-479b-b86a-d000ba46cdc5" (UID: "d14500ed-3452-479b-b86a-d000ba46cdc5"). InnerVolumeSpecName "kube-api-access-r28pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.072712 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-util" (OuterVolumeSpecName: "util") pod "d14500ed-3452-479b-b86a-d000ba46cdc5" (UID: "d14500ed-3452-479b-b86a-d000ba46cdc5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.153646 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r28pm\" (UniqueName: \"kubernetes.io/projected/d14500ed-3452-479b-b86a-d000ba46cdc5-kube-api-access-r28pm\") on node \"crc\" DevicePath \"\"" Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.153681 4740 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.153692 4740 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d14500ed-3452-479b-b86a-d000ba46cdc5-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.614215 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" event={"ID":"d14500ed-3452-479b-b86a-d000ba46cdc5","Type":"ContainerDied","Data":"100e0c324b14cace3e1224ed98bb01a63caa189a932c85059ee7046aacd13407"} Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.614646 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="100e0c324b14cace3e1224ed98bb01a63caa189a932c85059ee7046aacd13407" Jan 30 16:10:41 crc kubenswrapper[4740]: I0130 16:10:41.614272 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.662033 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dw9f7"] Jan 30 16:10:44 crc kubenswrapper[4740]: E0130 16:10:44.663818 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14500ed-3452-479b-b86a-d000ba46cdc5" containerName="pull" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.663862 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14500ed-3452-479b-b86a-d000ba46cdc5" containerName="pull" Jan 30 16:10:44 crc kubenswrapper[4740]: E0130 16:10:44.663898 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14500ed-3452-479b-b86a-d000ba46cdc5" containerName="util" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.663906 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14500ed-3452-479b-b86a-d000ba46cdc5" containerName="util" Jan 30 16:10:44 crc kubenswrapper[4740]: E0130 16:10:44.663925 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14500ed-3452-479b-b86a-d000ba46cdc5" containerName="extract" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.663931 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14500ed-3452-479b-b86a-d000ba46cdc5" containerName="extract" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.664073 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d14500ed-3452-479b-b86a-d000ba46cdc5" containerName="extract" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.664700 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dw9f7" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.667455 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.667689 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-l7c98" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.667755 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.677378 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dw9f7"] Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.706607 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw264\" (UniqueName: \"kubernetes.io/projected/27f0888f-27f8-4ebe-86ed-a07a0995a241-kube-api-access-kw264\") pod \"nmstate-operator-646758c888-dw9f7\" (UID: \"27f0888f-27f8-4ebe-86ed-a07a0995a241\") " pod="openshift-nmstate/nmstate-operator-646758c888-dw9f7" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.807618 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw264\" (UniqueName: \"kubernetes.io/projected/27f0888f-27f8-4ebe-86ed-a07a0995a241-kube-api-access-kw264\") pod \"nmstate-operator-646758c888-dw9f7\" (UID: \"27f0888f-27f8-4ebe-86ed-a07a0995a241\") " pod="openshift-nmstate/nmstate-operator-646758c888-dw9f7" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.829648 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw264\" (UniqueName: \"kubernetes.io/projected/27f0888f-27f8-4ebe-86ed-a07a0995a241-kube-api-access-kw264\") pod \"nmstate-operator-646758c888-dw9f7\" (UID: \"27f0888f-27f8-4ebe-86ed-a07a0995a241\") " pod="openshift-nmstate/nmstate-operator-646758c888-dw9f7" Jan 30 16:10:44 crc kubenswrapper[4740]: I0130 16:10:44.979154 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-dw9f7" Jan 30 16:10:45 crc kubenswrapper[4740]: I0130 16:10:45.473206 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-dw9f7"] Jan 30 16:10:45 crc kubenswrapper[4740]: I0130 16:10:45.642159 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dw9f7" event={"ID":"27f0888f-27f8-4ebe-86ed-a07a0995a241","Type":"ContainerStarted","Data":"2f31db7f3be23e2706039459c0cc95c2af585fb43ae001e2d8928ea26c9f32c3"} Jan 30 16:10:48 crc kubenswrapper[4740]: I0130 16:10:48.667382 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-dw9f7" event={"ID":"27f0888f-27f8-4ebe-86ed-a07a0995a241","Type":"ContainerStarted","Data":"1697bb8b651ec8056a2517e2817cd0f31aa1b3fc58a95d4fbceace4c085f1076"} Jan 30 16:10:48 crc kubenswrapper[4740]: I0130 16:10:48.688282 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-dw9f7" podStartSLOduration=2.2519496 podStartE2EDuration="4.688256415s" podCreationTimestamp="2026-01-30 16:10:44 +0000 UTC" firstStartedPulling="2026-01-30 16:10:45.485399283 +0000 UTC m=+894.122461892" lastFinishedPulling="2026-01-30 16:10:47.921706108 +0000 UTC m=+896.558768707" observedRunningTime="2026-01-30 16:10:48.686015679 +0000 UTC m=+897.323078298" watchObservedRunningTime="2026-01-30 16:10:48.688256415 +0000 UTC m=+897.325319014" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.727430 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-lb5hz"] Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.729249 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-lb5hz" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.732061 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m"] Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.732674 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-wz9fn" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.732844 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.738659 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.751441 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hs8jj"] Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.752298 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.757443 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-lb5hz"] Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.766426 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m"] Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.881761 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/479ee03d-d745-43ab-83d0-46f6e4cf1a21-nmstate-lock\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.881802 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/479ee03d-d745-43ab-83d0-46f6e4cf1a21-ovs-socket\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.881843 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsb46\" (UniqueName: \"kubernetes.io/projected/01faa5ac-05c7-44cf-a393-e67e5e47c683-kube-api-access-vsb46\") pod \"nmstate-metrics-54757c584b-lb5hz\" (UID: \"01faa5ac-05c7-44cf-a393-e67e5e47c683\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-lb5hz" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.881871 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrgj6\" (UniqueName: \"kubernetes.io/projected/3ba55d47-87a4-4a5e-b3a7-9a737aef9125-kube-api-access-hrgj6\") pod \"nmstate-webhook-8474b5b9d8-tf56m\" (UID: \"3ba55d47-87a4-4a5e-b3a7-9a737aef9125\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.881922 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rglck\" (UniqueName: \"kubernetes.io/projected/479ee03d-d745-43ab-83d0-46f6e4cf1a21-kube-api-access-rglck\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.881971 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3ba55d47-87a4-4a5e-b3a7-9a737aef9125-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tf56m\" (UID: \"3ba55d47-87a4-4a5e-b3a7-9a737aef9125\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.882001 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/479ee03d-d745-43ab-83d0-46f6e4cf1a21-dbus-socket\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.924721 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr"] Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 
16:10:49.925500 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.930810 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rb4lb" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.930889 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.931019 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.983266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/479ee03d-d745-43ab-83d0-46f6e4cf1a21-nmstate-lock\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.983308 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/479ee03d-d745-43ab-83d0-46f6e4cf1a21-ovs-socket\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.983357 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsb46\" (UniqueName: \"kubernetes.io/projected/01faa5ac-05c7-44cf-a393-e67e5e47c683-kube-api-access-vsb46\") pod \"nmstate-metrics-54757c584b-lb5hz\" (UID: \"01faa5ac-05c7-44cf-a393-e67e5e47c683\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-lb5hz" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.983381 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrgj6\" (UniqueName: \"kubernetes.io/projected/3ba55d47-87a4-4a5e-b3a7-9a737aef9125-kube-api-access-hrgj6\") pod \"nmstate-webhook-8474b5b9d8-tf56m\" (UID: \"3ba55d47-87a4-4a5e-b3a7-9a737aef9125\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.983402 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rglck\" (UniqueName: \"kubernetes.io/projected/479ee03d-d745-43ab-83d0-46f6e4cf1a21-kube-api-access-rglck\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.983454 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3ba55d47-87a4-4a5e-b3a7-9a737aef9125-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tf56m\" (UID: \"3ba55d47-87a4-4a5e-b3a7-9a737aef9125\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.983487 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/479ee03d-d745-43ab-83d0-46f6e4cf1a21-dbus-socket\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.983797 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/479ee03d-d745-43ab-83d0-46f6e4cf1a21-nmstate-lock\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.983860 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/479ee03d-d745-43ab-83d0-46f6e4cf1a21-dbus-socket\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.983880 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/479ee03d-d745-43ab-83d0-46f6e4cf1a21-ovs-socket\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:49 crc kubenswrapper[4740]: E0130 16:10:49.984147 4740 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 16:10:49 crc kubenswrapper[4740]: E0130 16:10:49.984201 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ba55d47-87a4-4a5e-b3a7-9a737aef9125-tls-key-pair podName:3ba55d47-87a4-4a5e-b3a7-9a737aef9125 nodeName:}" failed. No retries permitted until 2026-01-30 16:10:50.484184167 +0000 UTC m=+899.121246766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/3ba55d47-87a4-4a5e-b3a7-9a737aef9125-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-tf56m" (UID: "3ba55d47-87a4-4a5e-b3a7-9a737aef9125") : secret "openshift-nmstate-webhook" not found Jan 30 16:10:49 crc kubenswrapper[4740]: I0130 16:10:49.988972 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr"] Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.012116 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rglck\" (UniqueName: \"kubernetes.io/projected/479ee03d-d745-43ab-83d0-46f6e4cf1a21-kube-api-access-rglck\") pod \"nmstate-handler-hs8jj\" (UID: \"479ee03d-d745-43ab-83d0-46f6e4cf1a21\") " pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.012328 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrgj6\" (UniqueName: \"kubernetes.io/projected/3ba55d47-87a4-4a5e-b3a7-9a737aef9125-kube-api-access-hrgj6\") pod \"nmstate-webhook-8474b5b9d8-tf56m\" (UID: \"3ba55d47-87a4-4a5e-b3a7-9a737aef9125\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.015986 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsb46\" (UniqueName: \"kubernetes.io/projected/01faa5ac-05c7-44cf-a393-e67e5e47c683-kube-api-access-vsb46\") pod \"nmstate-metrics-54757c584b-lb5hz\" (UID: \"01faa5ac-05c7-44cf-a393-e67e5e47c683\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-lb5hz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.054852 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-lb5hz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.076084 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.090192 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0f05dfb1-ebdb-4b8d-8699-1b254807132b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b7cbr\" (UID: \"0f05dfb1-ebdb-4b8d-8699-1b254807132b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.090246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6v9k\" (UniqueName: \"kubernetes.io/projected/0f05dfb1-ebdb-4b8d-8699-1b254807132b-kube-api-access-g6v9k\") pod \"nmstate-console-plugin-7754f76f8b-b7cbr\" (UID: \"0f05dfb1-ebdb-4b8d-8699-1b254807132b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.090293 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f05dfb1-ebdb-4b8d-8699-1b254807132b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b7cbr\" (UID: \"0f05dfb1-ebdb-4b8d-8699-1b254807132b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:50 crc kubenswrapper[4740]: W0130 16:10:50.127117 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod479ee03d_d745_43ab_83d0_46f6e4cf1a21.slice/crio-de795a0a82fd45a24a0ea851eae58e985e2c32a374c36eceb0fea7d7f4f825d2 WatchSource:0}: Error finding container de795a0a82fd45a24a0ea851eae58e985e2c32a374c36eceb0fea7d7f4f825d2: Status 404 returned error can't find the container with id de795a0a82fd45a24a0ea851eae58e985e2c32a374c36eceb0fea7d7f4f825d2 Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.134703 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-765d8c459c-dtnpz"] Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.135882 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.154469 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-765d8c459c-dtnpz"] Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.191725 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f05dfb1-ebdb-4b8d-8699-1b254807132b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b7cbr\" (UID: \"0f05dfb1-ebdb-4b8d-8699-1b254807132b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.191816 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0f05dfb1-ebdb-4b8d-8699-1b254807132b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b7cbr\" (UID: \"0f05dfb1-ebdb-4b8d-8699-1b254807132b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.191847 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6v9k\" (UniqueName: \"kubernetes.io/projected/0f05dfb1-ebdb-4b8d-8699-1b254807132b-kube-api-access-g6v9k\") pod \"nmstate-console-plugin-7754f76f8b-b7cbr\" (UID: \"0f05dfb1-ebdb-4b8d-8699-1b254807132b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:50 crc kubenswrapper[4740]: E0130 16:10:50.192191 4740 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 30 16:10:50 crc kubenswrapper[4740]: E0130 16:10:50.192315 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f05dfb1-ebdb-4b8d-8699-1b254807132b-plugin-serving-cert podName:0f05dfb1-ebdb-4b8d-8699-1b254807132b nodeName:}" failed. No retries permitted until 2026-01-30 16:10:50.692296524 +0000 UTC m=+899.329359123 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/0f05dfb1-ebdb-4b8d-8699-1b254807132b-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-b7cbr" (UID: "0f05dfb1-ebdb-4b8d-8699-1b254807132b") : secret "plugin-serving-cert" not found Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.193783 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0f05dfb1-ebdb-4b8d-8699-1b254807132b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b7cbr\" (UID: \"0f05dfb1-ebdb-4b8d-8699-1b254807132b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.218087 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6v9k\" (UniqueName: \"kubernetes.io/projected/0f05dfb1-ebdb-4b8d-8699-1b254807132b-kube-api-access-g6v9k\") pod \"nmstate-console-plugin-7754f76f8b-b7cbr\" (UID: \"0f05dfb1-ebdb-4b8d-8699-1b254807132b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.293678 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/794af0ca-8b79-420b-9995-3ec872d4dc2d-console-oauth-config\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.293782 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-trusted-ca-bundle\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.293804 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bnkj\" (UniqueName: \"kubernetes.io/projected/794af0ca-8b79-420b-9995-3ec872d4dc2d-kube-api-access-9bnkj\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.293835 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-console-config\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.293912 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-oauth-serving-cert\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.293965 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-service-ca\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " 
pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.294037 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/794af0ca-8b79-420b-9995-3ec872d4dc2d-console-serving-cert\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.339935 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-lb5hz"] Jan 30 16:10:50 crc kubenswrapper[4740]: W0130 16:10:50.347751 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01faa5ac_05c7_44cf_a393_e67e5e47c683.slice/crio-79acd16ce059373eb821094a411fd35444e465e132793a169ab4b93b8a4f381b WatchSource:0}: Error finding container 79acd16ce059373eb821094a411fd35444e465e132793a169ab4b93b8a4f381b: Status 404 returned error can't find the container with id 79acd16ce059373eb821094a411fd35444e465e132793a169ab4b93b8a4f381b Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.395333 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-trusted-ca-bundle\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.395393 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bnkj\" (UniqueName: \"kubernetes.io/projected/794af0ca-8b79-420b-9995-3ec872d4dc2d-kube-api-access-9bnkj\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.395435 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-console-config\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.395486 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-oauth-serving-cert\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.396941 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-service-ca\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.396895 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-oauth-serving-cert\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc 
kubenswrapper[4740]: I0130 16:10:50.397037 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/794af0ca-8b79-420b-9995-3ec872d4dc2d-console-serving-cert\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.397069 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-console-config\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.397090 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/794af0ca-8b79-420b-9995-3ec872d4dc2d-console-oauth-config\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.397543 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-trusted-ca-bundle\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.397709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/794af0ca-8b79-420b-9995-3ec872d4dc2d-service-ca\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.409109 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/794af0ca-8b79-420b-9995-3ec872d4dc2d-console-serving-cert\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.409890 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/794af0ca-8b79-420b-9995-3ec872d4dc2d-console-oauth-config\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.411801 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bnkj\" (UniqueName: \"kubernetes.io/projected/794af0ca-8b79-420b-9995-3ec872d4dc2d-kube-api-access-9bnkj\") pod \"console-765d8c459c-dtnpz\" (UID: \"794af0ca-8b79-420b-9995-3ec872d4dc2d\") " pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.459323 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.501897 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3ba55d47-87a4-4a5e-b3a7-9a737aef9125-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tf56m\" (UID: \"3ba55d47-87a4-4a5e-b3a7-9a737aef9125\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.507403 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3ba55d47-87a4-4a5e-b3a7-9a737aef9125-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-tf56m\" (UID: \"3ba55d47-87a4-4a5e-b3a7-9a737aef9125\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.666041 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.693584 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-lb5hz" event={"ID":"01faa5ac-05c7-44cf-a393-e67e5e47c683","Type":"ContainerStarted","Data":"79acd16ce059373eb821094a411fd35444e465e132793a169ab4b93b8a4f381b"} Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.694497 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hs8jj" event={"ID":"479ee03d-d745-43ab-83d0-46f6e4cf1a21","Type":"ContainerStarted","Data":"de795a0a82fd45a24a0ea851eae58e985e2c32a374c36eceb0fea7d7f4f825d2"} Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.706588 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f05dfb1-ebdb-4b8d-8699-1b254807132b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b7cbr\" (UID: \"0f05dfb1-ebdb-4b8d-8699-1b254807132b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.713407 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0f05dfb1-ebdb-4b8d-8699-1b254807132b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b7cbr\" (UID: \"0f05dfb1-ebdb-4b8d-8699-1b254807132b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.777480 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-765d8c459c-dtnpz"] Jan 30 16:10:50 crc kubenswrapper[4740]: W0130 16:10:50.781114 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod794af0ca_8b79_420b_9995_3ec872d4dc2d.slice/crio-d42493b8420287914b618be739156301a72a264eae3eb841c2d7b307bbb6d951 WatchSource:0}: Error finding container d42493b8420287914b618be739156301a72a264eae3eb841c2d7b307bbb6d951: Status 404 returned error can't find the container with id d42493b8420287914b618be739156301a72a264eae3eb841c2d7b307bbb6d951 Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.844479 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" Jan 30 16:10:50 crc kubenswrapper[4740]: I0130 16:10:50.961780 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m"] Jan 30 16:10:50 crc kubenswrapper[4740]: W0130 16:10:50.973454 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ba55d47_87a4_4a5e_b3a7_9a737aef9125.slice/crio-ea42e22b61bb3943d090ae95f05dc1b4fdc00e0de07e9c96d22df749fbdcdfd9 WatchSource:0}: Error finding container ea42e22b61bb3943d090ae95f05dc1b4fdc00e0de07e9c96d22df749fbdcdfd9: Status 404 returned error can't find the container with id ea42e22b61bb3943d090ae95f05dc1b4fdc00e0de07e9c96d22df749fbdcdfd9 Jan 30 16:10:51 crc kubenswrapper[4740]: I0130 16:10:51.286218 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr"] Jan 30 16:10:51 crc kubenswrapper[4740]: W0130 16:10:51.294109 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f05dfb1_ebdb_4b8d_8699_1b254807132b.slice/crio-b6c7ae99f15d06c560d6b42731966b78af2d15166c25f0e12343de3d01579fc2 WatchSource:0}: Error finding container b6c7ae99f15d06c560d6b42731966b78af2d15166c25f0e12343de3d01579fc2: Status 404 returned error can't find the container with id b6c7ae99f15d06c560d6b42731966b78af2d15166c25f0e12343de3d01579fc2 Jan 30 16:10:51 crc kubenswrapper[4740]: I0130 16:10:51.706221 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" event={"ID":"0f05dfb1-ebdb-4b8d-8699-1b254807132b","Type":"ContainerStarted","Data":"b6c7ae99f15d06c560d6b42731966b78af2d15166c25f0e12343de3d01579fc2"} Jan 30 16:10:51 crc kubenswrapper[4740]: I0130 16:10:51.710487 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-765d8c459c-dtnpz" event={"ID":"794af0ca-8b79-420b-9995-3ec872d4dc2d","Type":"ContainerStarted","Data":"4598bea3d99f368f422303a14dbf717cdc7091bb1e5f8a8984c56d55bb2aef35"} Jan 30 16:10:51 crc kubenswrapper[4740]: I0130 16:10:51.710550 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-765d8c459c-dtnpz" event={"ID":"794af0ca-8b79-420b-9995-3ec872d4dc2d","Type":"ContainerStarted","Data":"d42493b8420287914b618be739156301a72a264eae3eb841c2d7b307bbb6d951"} Jan 30 16:10:51 crc kubenswrapper[4740]: I0130 16:10:51.724635 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" event={"ID":"3ba55d47-87a4-4a5e-b3a7-9a737aef9125","Type":"ContainerStarted","Data":"ea42e22b61bb3943d090ae95f05dc1b4fdc00e0de07e9c96d22df749fbdcdfd9"} Jan 30 16:10:51 crc kubenswrapper[4740]: I0130 16:10:51.742108 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-765d8c459c-dtnpz" podStartSLOduration=1.7420818329999999 podStartE2EDuration="1.742081833s" podCreationTimestamp="2026-01-30 16:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:10:51.72991645 +0000 UTC m=+900.366979049" watchObservedRunningTime="2026-01-30 16:10:51.742081833 +0000 UTC m=+900.379144472" Jan 30 16:10:53 crc kubenswrapper[4740]: I0130 16:10:53.745101 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-handler-hs8jj" event={"ID":"479ee03d-d745-43ab-83d0-46f6e4cf1a21","Type":"ContainerStarted","Data":"d7492602a7ba1291a9d1f97ef0d203ac724bd6a8e65857e76c1591d9aa56e340"} Jan 30 16:10:53 crc kubenswrapper[4740]: I0130 16:10:53.746064 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:10:53 crc kubenswrapper[4740]: I0130 16:10:53.749710 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" event={"ID":"3ba55d47-87a4-4a5e-b3a7-9a737aef9125","Type":"ContainerStarted","Data":"774d7eb7092d5471e1e8084cec9cc30fa0e88a34212882b5f54e674b770163a8"} Jan 30 16:10:53 crc kubenswrapper[4740]: I0130 16:10:53.750313 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" Jan 30 16:10:53 crc kubenswrapper[4740]: I0130 16:10:53.757841 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-lb5hz" event={"ID":"01faa5ac-05c7-44cf-a393-e67e5e47c683","Type":"ContainerStarted","Data":"a44a8d54385a57c7caa56d8e3f32e0dbce6f19203cf0310ed5803087f493af80"} Jan 30 16:10:53 crc kubenswrapper[4740]: I0130 16:10:53.770034 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-hs8jj" podStartSLOduration=1.862227003 podStartE2EDuration="4.76999086s" podCreationTimestamp="2026-01-30 16:10:49 +0000 UTC" firstStartedPulling="2026-01-30 16:10:50.146904603 +0000 UTC m=+898.783967202" lastFinishedPulling="2026-01-30 16:10:53.05466842 +0000 UTC m=+901.691731059" observedRunningTime="2026-01-30 16:10:53.762515033 +0000 UTC m=+902.399577632" watchObservedRunningTime="2026-01-30 16:10:53.76999086 +0000 UTC m=+902.407053459" Jan 30 16:10:53 crc kubenswrapper[4740]: I0130 16:10:53.786485 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" podStartSLOduration=2.690287192 podStartE2EDuration="4.78646239s" podCreationTimestamp="2026-01-30 16:10:49 +0000 UTC" firstStartedPulling="2026-01-30 16:10:50.97689007 +0000 UTC m=+899.613952669" lastFinishedPulling="2026-01-30 16:10:53.073065278 +0000 UTC m=+901.710127867" observedRunningTime="2026-01-30 16:10:53.781529607 +0000 UTC m=+902.418592206" watchObservedRunningTime="2026-01-30 16:10:53.78646239 +0000 UTC m=+902.423524989" Jan 30 16:10:55 crc kubenswrapper[4740]: I0130 16:10:55.775713 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" event={"ID":"0f05dfb1-ebdb-4b8d-8699-1b254807132b","Type":"ContainerStarted","Data":"56c9ffc9b02067a75521862e9ed18494e96c2296a589795beb8d966b272f5500"} Jan 30 16:10:55 crc kubenswrapper[4740]: I0130 16:10:55.795716 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b7cbr" podStartSLOduration=3.357762059 podStartE2EDuration="6.795691741s" podCreationTimestamp="2026-01-30 16:10:49 +0000 UTC" firstStartedPulling="2026-01-30 16:10:51.296609929 +0000 UTC m=+899.933672528" lastFinishedPulling="2026-01-30 16:10:54.734539611 +0000 UTC m=+903.371602210" observedRunningTime="2026-01-30 16:10:55.793699591 +0000 UTC m=+904.430762190" watchObservedRunningTime="2026-01-30 16:10:55.795691741 +0000 UTC m=+904.432754330" Jan 30 16:10:56 crc kubenswrapper[4740]: I0130 16:10:56.783525 4740 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-lb5hz" event={"ID":"01faa5ac-05c7-44cf-a393-e67e5e47c683","Type":"ContainerStarted","Data":"80555c9c20fcc06032892ed0f5c7677adfbfd4e78398f0dca74254fc99f58641"} Jan 30 16:10:56 crc kubenswrapper[4740]: I0130 16:10:56.806602 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-lb5hz" podStartSLOduration=1.879056722 podStartE2EDuration="7.806580738s" podCreationTimestamp="2026-01-30 16:10:49 +0000 UTC" firstStartedPulling="2026-01-30 16:10:50.350689092 +0000 UTC m=+898.987751681" lastFinishedPulling="2026-01-30 16:10:56.278213098 +0000 UTC m=+904.915275697" observedRunningTime="2026-01-30 16:10:56.805893871 +0000 UTC m=+905.442956470" watchObservedRunningTime="2026-01-30 16:10:56.806580738 +0000 UTC m=+905.443643347" Jan 30 16:11:00 crc kubenswrapper[4740]: I0130 16:11:00.116042 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-hs8jj" Jan 30 16:11:00 crc kubenswrapper[4740]: I0130 16:11:00.459526 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:11:00 crc kubenswrapper[4740]: I0130 16:11:00.459687 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:11:00 crc kubenswrapper[4740]: I0130 16:11:00.466391 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:11:00 crc kubenswrapper[4740]: I0130 16:11:00.818763 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-765d8c459c-dtnpz" Jan 30 16:11:00 crc kubenswrapper[4740]: I0130 16:11:00.893871 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5q9nt"] Jan 30 16:11:10 crc kubenswrapper[4740]: I0130 16:11:10.675410 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-tf56m" Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.673816 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qxh7x"] Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.678390 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.686489 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxh7x"] Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.787893 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-catalog-content\") pod \"redhat-marketplace-qxh7x\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.787969 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rv62\" (UniqueName: \"kubernetes.io/projected/18e5d58d-0940-41d0-9e27-f8b492c43f5b-kube-api-access-7rv62\") pod \"redhat-marketplace-qxh7x\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.788076 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-utilities\") pod \"redhat-marketplace-qxh7x\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.889662 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-utilities\") pod \"redhat-marketplace-qxh7x\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.890583 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-catalog-content\") pod \"redhat-marketplace-qxh7x\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.890629 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rv62\" (UniqueName: \"kubernetes.io/projected/18e5d58d-0940-41d0-9e27-f8b492c43f5b-kube-api-access-7rv62\") pod \"redhat-marketplace-qxh7x\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.890452 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-utilities\") pod \"redhat-marketplace-qxh7x\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.891216 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-catalog-content\") pod \"redhat-marketplace-qxh7x\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:24 crc kubenswrapper[4740]: I0130 16:11:24.920524 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7rv62\" (UniqueName: \"kubernetes.io/projected/18e5d58d-0940-41d0-9e27-f8b492c43f5b-kube-api-access-7rv62\") pod \"redhat-marketplace-qxh7x\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:25 crc kubenswrapper[4740]: I0130 16:11:25.005938 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:25 crc kubenswrapper[4740]: I0130 16:11:25.465926 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxh7x"] Jan 30 16:11:25 crc kubenswrapper[4740]: I0130 16:11:25.959117 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-5q9nt" podUID="d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" containerName="console" containerID="cri-o://452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf" gracePeriod=15 Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.017000 4740 generic.go:334] "Generic (PLEG): container finished" podID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" containerID="b74e99e5436123653e3040b6a664ad0642ecf725e242222e04fb75736ac50d22" exitCode=0 Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.017107 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxh7x" event={"ID":"18e5d58d-0940-41d0-9e27-f8b492c43f5b","Type":"ContainerDied","Data":"b74e99e5436123653e3040b6a664ad0642ecf725e242222e04fb75736ac50d22"} Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.017163 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxh7x" event={"ID":"18e5d58d-0940-41d0-9e27-f8b492c43f5b","Type":"ContainerStarted","Data":"11852a3482a14d29401193f62f404ad8cc4aa28b3a78167db52e0b6056daf266"} Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.563631 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5q9nt_d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9/console/0.log" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.564103 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.617722 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-oauth-serving-cert\") pod \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.617829 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-trusted-ca-bundle\") pod \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.617866 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-service-ca\") pod \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.617935 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-serving-cert\") pod \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.617993 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjjd6\" (UniqueName: \"kubernetes.io/projected/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-kube-api-access-zjjd6\") pod \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.618010 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-oauth-config\") pod \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.618034 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-config\") pod \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\" (UID: \"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9\") " Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.619435 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" (UID: "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.619764 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" (UID: "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.620617 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-service-ca" (OuterVolumeSpecName: "service-ca") pod "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" (UID: "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.620731 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-config" (OuterVolumeSpecName: "console-config") pod "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" (UID: "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.627156 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" (UID: "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.627179 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-kube-api-access-zjjd6" (OuterVolumeSpecName: "kube-api-access-zjjd6") pod "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" (UID: "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9"). InnerVolumeSpecName "kube-api-access-zjjd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.628212 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" (UID: "d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.719980 4740 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.720046 4740 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.720068 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjjd6\" (UniqueName: \"kubernetes.io/projected/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-kube-api-access-zjjd6\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.720081 4740 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.720094 4740 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.720105 4740 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.720117 4740 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.887970 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h"] Jan 30 16:11:26 crc kubenswrapper[4740]: E0130 16:11:26.888389 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" containerName="console" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.888410 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" containerName="console" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.888707 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" containerName="console" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.890307 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.893059 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.894308 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h"] Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.923732 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.923801 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:26 crc kubenswrapper[4740]: I0130 16:11:26.923867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gct5m\" (UniqueName: \"kubernetes.io/projected/e4ef49bf-103d-4989-a9aa-52c98c542c3d-kube-api-access-gct5m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.026674 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gct5m\" (UniqueName: \"kubernetes.io/projected/e4ef49bf-103d-4989-a9aa-52c98c542c3d-kube-api-access-gct5m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.026804 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.026873 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.027624 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.027745 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.028020 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5q9nt_d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9/console/0.log" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.028069 4740 generic.go:334] "Generic (PLEG): container finished" podID="d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" containerID="452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf" exitCode=2 Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.028112 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5q9nt" event={"ID":"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9","Type":"ContainerDied","Data":"452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf"} Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.028150 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5q9nt" event={"ID":"d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9","Type":"ContainerDied","Data":"71af78eee9c9078358eefbf291d2cf2c028b35d41088363e9bf4f4e1d4a8d074"} Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.028158 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5q9nt" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.028179 4740 scope.go:117] "RemoveContainer" containerID="452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.057531 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gct5m\" (UniqueName: \"kubernetes.io/projected/e4ef49bf-103d-4989-a9aa-52c98c542c3d-kube-api-access-gct5m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.080580 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5q9nt"] Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.088963 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-5q9nt"] Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.152577 4740 scope.go:117] "RemoveContainer" containerID="452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf" Jan 30 16:11:27 crc kubenswrapper[4740]: E0130 16:11:27.156458 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf\": container with ID starting with 452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf not found: ID does not exist" containerID="452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.156494 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf"} err="failed to get container status \"452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf\": rpc error: code = NotFound desc = could not find container \"452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf\": container with ID starting with 452fd9bfdc67aaae15ba07d61cccfab68f2a5a9d4b9beec6d69807e46e78f2cf not found: ID does not exist" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.209213 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.344211 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9" path="/var/lib/kubelet/pods/d3c856bb-ba47-462b-b4ed-31fd8a5e7ba9/volumes" Jan 30 16:11:27 crc kubenswrapper[4740]: I0130 16:11:27.709459 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h"] Jan 30 16:11:28 crc kubenswrapper[4740]: I0130 16:11:28.039680 4740 generic.go:334] "Generic (PLEG): container finished" podID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" containerID="69a17297a80be550c09764ac430722c22d3bf9794ce0280ce6e35ae196fd05dc" exitCode=0 Jan 30 16:11:28 crc kubenswrapper[4740]: I0130 16:11:28.039769 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxh7x" event={"ID":"18e5d58d-0940-41d0-9e27-f8b492c43f5b","Type":"ContainerDied","Data":"69a17297a80be550c09764ac430722c22d3bf9794ce0280ce6e35ae196fd05dc"} Jan 30 16:11:28 crc kubenswrapper[4740]: I0130 16:11:28.044220 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" event={"ID":"e4ef49bf-103d-4989-a9aa-52c98c542c3d","Type":"ContainerStarted","Data":"8b3257e1c361a4758266959c9c44498b6d943a035c27cdf0d72fddfcbc70125f"} Jan 30 16:11:28 crc kubenswrapper[4740]: I0130 16:11:28.044328 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" event={"ID":"e4ef49bf-103d-4989-a9aa-52c98c542c3d","Type":"ContainerStarted","Data":"a9b46446ac5d13fcad1db58c8400f55fdd2a2ffab2bcb2dd819a23050e2cb111"} Jan 30 16:11:29 crc kubenswrapper[4740]: I0130 16:11:29.055573 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxh7x" event={"ID":"18e5d58d-0940-41d0-9e27-f8b492c43f5b","Type":"ContainerStarted","Data":"19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715"} Jan 30 16:11:29 crc kubenswrapper[4740]: I0130 16:11:29.057829 4740 generic.go:334] "Generic (PLEG): container finished" podID="e4ef49bf-103d-4989-a9aa-52c98c542c3d" containerID="8b3257e1c361a4758266959c9c44498b6d943a035c27cdf0d72fddfcbc70125f" exitCode=0 Jan 30 16:11:29 crc kubenswrapper[4740]: I0130 16:11:29.057871 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" event={"ID":"e4ef49bf-103d-4989-a9aa-52c98c542c3d","Type":"ContainerDied","Data":"8b3257e1c361a4758266959c9c44498b6d943a035c27cdf0d72fddfcbc70125f"} Jan 30 16:11:29 crc kubenswrapper[4740]: I0130 16:11:29.082378 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qxh7x" podStartSLOduration=2.379338291 podStartE2EDuration="5.082338164s" podCreationTimestamp="2026-01-30 16:11:24 +0000 UTC" firstStartedPulling="2026-01-30 16:11:26.020067896 +0000 UTC m=+934.657130505" lastFinishedPulling="2026-01-30 16:11:28.723067779 +0000 UTC m=+937.360130378" observedRunningTime="2026-01-30 16:11:29.077511754 +0000 UTC m=+937.714574343" watchObservedRunningTime="2026-01-30 16:11:29.082338164 +0000 UTC m=+937.719400763" Jan 30 16:11:31 crc kubenswrapper[4740]: I0130 16:11:31.071224 4740 generic.go:334] "Generic (PLEG): 
container finished" podID="e4ef49bf-103d-4989-a9aa-52c98c542c3d" containerID="aeef678b3f34b4f54d8b9dfafe3085f98058a3f25acb4ae0b51bc3d99a261a20" exitCode=0 Jan 30 16:11:31 crc kubenswrapper[4740]: I0130 16:11:31.071311 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" event={"ID":"e4ef49bf-103d-4989-a9aa-52c98c542c3d","Type":"ContainerDied","Data":"aeef678b3f34b4f54d8b9dfafe3085f98058a3f25acb4ae0b51bc3d99a261a20"} Jan 30 16:11:32 crc kubenswrapper[4740]: I0130 16:11:32.082136 4740 generic.go:334] "Generic (PLEG): container finished" podID="e4ef49bf-103d-4989-a9aa-52c98c542c3d" containerID="7181db768066dc23854c5b65ea17ee03f7922a85ba0aeb39cef31867b01485fd" exitCode=0 Jan 30 16:11:32 crc kubenswrapper[4740]: I0130 16:11:32.082196 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" event={"ID":"e4ef49bf-103d-4989-a9aa-52c98c542c3d","Type":"ContainerDied","Data":"7181db768066dc23854c5b65ea17ee03f7922a85ba0aeb39cef31867b01485fd"} Jan 30 16:11:33 crc kubenswrapper[4740]: I0130 16:11:33.339252 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:33 crc kubenswrapper[4740]: I0130 16:11:33.528048 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-util\") pod \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " Jan 30 16:11:33 crc kubenswrapper[4740]: I0130 16:11:33.528191 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-bundle\") pod \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " Jan 30 16:11:33 crc kubenswrapper[4740]: I0130 16:11:33.528234 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gct5m\" (UniqueName: \"kubernetes.io/projected/e4ef49bf-103d-4989-a9aa-52c98c542c3d-kube-api-access-gct5m\") pod \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\" (UID: \"e4ef49bf-103d-4989-a9aa-52c98c542c3d\") " Jan 30 16:11:33 crc kubenswrapper[4740]: I0130 16:11:33.529365 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-bundle" (OuterVolumeSpecName: "bundle") pod "e4ef49bf-103d-4989-a9aa-52c98c542c3d" (UID: "e4ef49bf-103d-4989-a9aa-52c98c542c3d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:11:33 crc kubenswrapper[4740]: I0130 16:11:33.534517 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ef49bf-103d-4989-a9aa-52c98c542c3d-kube-api-access-gct5m" (OuterVolumeSpecName: "kube-api-access-gct5m") pod "e4ef49bf-103d-4989-a9aa-52c98c542c3d" (UID: "e4ef49bf-103d-4989-a9aa-52c98c542c3d"). InnerVolumeSpecName "kube-api-access-gct5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:11:33 crc kubenswrapper[4740]: I0130 16:11:33.629655 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gct5m\" (UniqueName: \"kubernetes.io/projected/e4ef49bf-103d-4989-a9aa-52c98c542c3d-kube-api-access-gct5m\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:33 crc kubenswrapper[4740]: I0130 16:11:33.629707 4740 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:34 crc kubenswrapper[4740]: I0130 16:11:34.098506 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" event={"ID":"e4ef49bf-103d-4989-a9aa-52c98c542c3d","Type":"ContainerDied","Data":"a9b46446ac5d13fcad1db58c8400f55fdd2a2ffab2bcb2dd819a23050e2cb111"} Jan 30 16:11:34 crc kubenswrapper[4740]: I0130 16:11:34.098583 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9b46446ac5d13fcad1db58c8400f55fdd2a2ffab2bcb2dd819a23050e2cb111" Jan 30 16:11:34 crc kubenswrapper[4740]: I0130 16:11:34.098609 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h" Jan 30 16:11:34 crc kubenswrapper[4740]: I0130 16:11:34.300682 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-util" (OuterVolumeSpecName: "util") pod "e4ef49bf-103d-4989-a9aa-52c98c542c3d" (UID: "e4ef49bf-103d-4989-a9aa-52c98c542c3d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:11:34 crc kubenswrapper[4740]: I0130 16:11:34.338224 4740 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4ef49bf-103d-4989-a9aa-52c98c542c3d-util\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:35 crc kubenswrapper[4740]: I0130 16:11:35.007778 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:35 crc kubenswrapper[4740]: I0130 16:11:35.008481 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:35 crc kubenswrapper[4740]: I0130 16:11:35.055545 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:35 crc kubenswrapper[4740]: I0130 16:11:35.177231 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:37 crc kubenswrapper[4740]: I0130 16:11:37.425104 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxh7x"] Jan 30 16:11:37 crc kubenswrapper[4740]: I0130 16:11:37.425759 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qxh7x" podUID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" containerName="registry-server" containerID="cri-o://19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715" gracePeriod=2 Jan 30 16:11:37 crc kubenswrapper[4740]: I0130 16:11:37.857633 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:37 crc kubenswrapper[4740]: I0130 16:11:37.988973 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-catalog-content\") pod \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " Jan 30 16:11:37 crc kubenswrapper[4740]: I0130 16:11:37.989058 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rv62\" (UniqueName: \"kubernetes.io/projected/18e5d58d-0940-41d0-9e27-f8b492c43f5b-kube-api-access-7rv62\") pod \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " Jan 30 16:11:37 crc kubenswrapper[4740]: I0130 16:11:37.989162 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-utilities\") pod \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\" (UID: \"18e5d58d-0940-41d0-9e27-f8b492c43f5b\") " Jan 30 16:11:37 crc kubenswrapper[4740]: I0130 16:11:37.996438 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-utilities" (OuterVolumeSpecName: "utilities") pod "18e5d58d-0940-41d0-9e27-f8b492c43f5b" (UID: "18e5d58d-0940-41d0-9e27-f8b492c43f5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:11:37 crc kubenswrapper[4740]: I0130 16:11:37.996549 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e5d58d-0940-41d0-9e27-f8b492c43f5b-kube-api-access-7rv62" (OuterVolumeSpecName: "kube-api-access-7rv62") pod "18e5d58d-0940-41d0-9e27-f8b492c43f5b" (UID: "18e5d58d-0940-41d0-9e27-f8b492c43f5b"). InnerVolumeSpecName "kube-api-access-7rv62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.014634 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18e5d58d-0940-41d0-9e27-f8b492c43f5b" (UID: "18e5d58d-0940-41d0-9e27-f8b492c43f5b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.090608 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rv62\" (UniqueName: \"kubernetes.io/projected/18e5d58d-0940-41d0-9e27-f8b492c43f5b-kube-api-access-7rv62\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.090652 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.090666 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e5d58d-0940-41d0-9e27-f8b492c43f5b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.128590 4740 generic.go:334] "Generic (PLEG): container finished" podID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" containerID="19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715" exitCode=0 Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.128652 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxh7x" event={"ID":"18e5d58d-0940-41d0-9e27-f8b492c43f5b","Type":"ContainerDied","Data":"19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715"} Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.128689 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxh7x" event={"ID":"18e5d58d-0940-41d0-9e27-f8b492c43f5b","Type":"ContainerDied","Data":"11852a3482a14d29401193f62f404ad8cc4aa28b3a78167db52e0b6056daf266"} Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.128712 4740 scope.go:117] "RemoveContainer" containerID="19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.128877 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxh7x" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.153312 4740 scope.go:117] "RemoveContainer" containerID="69a17297a80be550c09764ac430722c22d3bf9794ce0280ce6e35ae196fd05dc" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.175456 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxh7x"] Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.182183 4740 scope.go:117] "RemoveContainer" containerID="b74e99e5436123653e3040b6a664ad0642ecf725e242222e04fb75736ac50d22" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.186625 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxh7x"] Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.205279 4740 scope.go:117] "RemoveContainer" containerID="19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715" Jan 30 16:11:38 crc kubenswrapper[4740]: E0130 16:11:38.206042 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715\": container with ID starting with 19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715 not found: ID does not exist" containerID="19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.206127 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715"} err="failed to get container status \"19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715\": rpc error: code = NotFound desc = could not find container \"19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715\": container with ID starting with 19d62f04a51c7465cb10d3636ea7d77728b0dbdfb4b2360bcc888e2697ef2715 not found: ID does not exist" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.206159 4740 scope.go:117] "RemoveContainer" containerID="69a17297a80be550c09764ac430722c22d3bf9794ce0280ce6e35ae196fd05dc" Jan 30 16:11:38 crc kubenswrapper[4740]: E0130 16:11:38.206570 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a17297a80be550c09764ac430722c22d3bf9794ce0280ce6e35ae196fd05dc\": container with ID starting with 69a17297a80be550c09764ac430722c22d3bf9794ce0280ce6e35ae196fd05dc not found: ID does not exist" containerID="69a17297a80be550c09764ac430722c22d3bf9794ce0280ce6e35ae196fd05dc" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.206605 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a17297a80be550c09764ac430722c22d3bf9794ce0280ce6e35ae196fd05dc"} err="failed to get container status \"69a17297a80be550c09764ac430722c22d3bf9794ce0280ce6e35ae196fd05dc\": rpc error: code = NotFound desc = could not find container \"69a17297a80be550c09764ac430722c22d3bf9794ce0280ce6e35ae196fd05dc\": container with ID starting with 69a17297a80be550c09764ac430722c22d3bf9794ce0280ce6e35ae196fd05dc not found: ID does not exist" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.206630 4740 scope.go:117] "RemoveContainer" containerID="b74e99e5436123653e3040b6a664ad0642ecf725e242222e04fb75736ac50d22" Jan 30 16:11:38 crc kubenswrapper[4740]: E0130 16:11:38.207006 4740 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b74e99e5436123653e3040b6a664ad0642ecf725e242222e04fb75736ac50d22\": container with ID starting with b74e99e5436123653e3040b6a664ad0642ecf725e242222e04fb75736ac50d22 not found: ID does not exist" containerID="b74e99e5436123653e3040b6a664ad0642ecf725e242222e04fb75736ac50d22" Jan 30 16:11:38 crc kubenswrapper[4740]: I0130 16:11:38.207060 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b74e99e5436123653e3040b6a664ad0642ecf725e242222e04fb75736ac50d22"} err="failed to get container status \"b74e99e5436123653e3040b6a664ad0642ecf725e242222e04fb75736ac50d22\": rpc error: code = NotFound desc = could not find container \"b74e99e5436123653e3040b6a664ad0642ecf725e242222e04fb75736ac50d22\": container with ID starting with b74e99e5436123653e3040b6a664ad0642ecf725e242222e04fb75736ac50d22 not found: ID does not exist" Jan 30 16:11:39 crc kubenswrapper[4740]: I0130 16:11:39.346403 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" path="/var/lib/kubelet/pods/18e5d58d-0940-41d0-9e27-f8b492c43f5b/volumes" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.368139 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw"] Jan 30 16:11:44 crc kubenswrapper[4740]: E0130 16:11:44.368813 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" containerName="extract-utilities" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.368827 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" containerName="extract-utilities" Jan 30 16:11:44 crc kubenswrapper[4740]: E0130 16:11:44.368842 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" containerName="registry-server" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.368848 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" containerName="registry-server" Jan 30 16:11:44 crc kubenswrapper[4740]: E0130 16:11:44.368862 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" containerName="extract-content" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.368868 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" containerName="extract-content" Jan 30 16:11:44 crc kubenswrapper[4740]: E0130 16:11:44.368880 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ef49bf-103d-4989-a9aa-52c98c542c3d" containerName="pull" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.368887 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ef49bf-103d-4989-a9aa-52c98c542c3d" containerName="pull" Jan 30 16:11:44 crc kubenswrapper[4740]: E0130 16:11:44.368897 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ef49bf-103d-4989-a9aa-52c98c542c3d" containerName="util" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.368903 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ef49bf-103d-4989-a9aa-52c98c542c3d" containerName="util" Jan 30 16:11:44 crc kubenswrapper[4740]: E0130 16:11:44.368917 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ef49bf-103d-4989-a9aa-52c98c542c3d" containerName="extract" Jan 30 16:11:44 crc 
kubenswrapper[4740]: I0130 16:11:44.368922 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ef49bf-103d-4989-a9aa-52c98c542c3d" containerName="extract" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.369022 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ef49bf-103d-4989-a9aa-52c98c542c3d" containerName="extract" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.369038 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e5d58d-0940-41d0-9e27-f8b492c43f5b" containerName="registry-server" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.369581 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.375083 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.375864 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-n9k62" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.376732 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.376988 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.377209 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.402789 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw"] Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.476319 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de72a0d2-8f4e-442e-99e0-8179782f810b-apiservice-cert\") pod \"metallb-operator-controller-manager-679cd9954d-7f5xw\" (UID: \"de72a0d2-8f4e-442e-99e0-8179782f810b\") " pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.476436 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de72a0d2-8f4e-442e-99e0-8179782f810b-webhook-cert\") pod \"metallb-operator-controller-manager-679cd9954d-7f5xw\" (UID: \"de72a0d2-8f4e-442e-99e0-8179782f810b\") " pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.476487 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-987wp\" (UniqueName: \"kubernetes.io/projected/de72a0d2-8f4e-442e-99e0-8179782f810b-kube-api-access-987wp\") pod \"metallb-operator-controller-manager-679cd9954d-7f5xw\" (UID: \"de72a0d2-8f4e-442e-99e0-8179782f810b\") " pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.577407 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de72a0d2-8f4e-442e-99e0-8179782f810b-apiservice-cert\") pod 
\"metallb-operator-controller-manager-679cd9954d-7f5xw\" (UID: \"de72a0d2-8f4e-442e-99e0-8179782f810b\") " pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.577477 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de72a0d2-8f4e-442e-99e0-8179782f810b-webhook-cert\") pod \"metallb-operator-controller-manager-679cd9954d-7f5xw\" (UID: \"de72a0d2-8f4e-442e-99e0-8179782f810b\") " pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.577534 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-987wp\" (UniqueName: \"kubernetes.io/projected/de72a0d2-8f4e-442e-99e0-8179782f810b-kube-api-access-987wp\") pod \"metallb-operator-controller-manager-679cd9954d-7f5xw\" (UID: \"de72a0d2-8f4e-442e-99e0-8179782f810b\") " pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.590719 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/de72a0d2-8f4e-442e-99e0-8179782f810b-apiservice-cert\") pod \"metallb-operator-controller-manager-679cd9954d-7f5xw\" (UID: \"de72a0d2-8f4e-442e-99e0-8179782f810b\") " pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.591118 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/de72a0d2-8f4e-442e-99e0-8179782f810b-webhook-cert\") pod \"metallb-operator-controller-manager-679cd9954d-7f5xw\" (UID: \"de72a0d2-8f4e-442e-99e0-8179782f810b\") " pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.601119 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-987wp\" (UniqueName: \"kubernetes.io/projected/de72a0d2-8f4e-442e-99e0-8179782f810b-kube-api-access-987wp\") pod \"metallb-operator-controller-manager-679cd9954d-7f5xw\" (UID: \"de72a0d2-8f4e-442e-99e0-8179782f810b\") " pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.687610 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.689764 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"] Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.690774 4740 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.687610 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw"
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.689764 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"]
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.690774 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.693941 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-6bddt"
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.694581 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.694963 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.720615 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"]
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.885289 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e49666f-5b34-430c-bfa4-c85208433cda-webhook-cert\") pod \"metallb-operator-webhook-server-7f44447989-gnfds\" (UID: \"7e49666f-5b34-430c-bfa4-c85208433cda\") " pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.885853 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e49666f-5b34-430c-bfa4-c85208433cda-apiservice-cert\") pod \"metallb-operator-webhook-server-7f44447989-gnfds\" (UID: \"7e49666f-5b34-430c-bfa4-c85208433cda\") " pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.885893 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z764n\" (UniqueName: \"kubernetes.io/projected/7e49666f-5b34-430c-bfa4-c85208433cda-kube-api-access-z764n\") pod \"metallb-operator-webhook-server-7f44447989-gnfds\" (UID: \"7e49666f-5b34-430c-bfa4-c85208433cda\") " pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.987132 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e49666f-5b34-430c-bfa4-c85208433cda-apiservice-cert\") pod \"metallb-operator-webhook-server-7f44447989-gnfds\" (UID: \"7e49666f-5b34-430c-bfa4-c85208433cda\") " pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.987191 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z764n\" (UniqueName: \"kubernetes.io/projected/7e49666f-5b34-430c-bfa4-c85208433cda-kube-api-access-z764n\") pod \"metallb-operator-webhook-server-7f44447989-gnfds\" (UID: \"7e49666f-5b34-430c-bfa4-c85208433cda\") " pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:44 crc kubenswrapper[4740]: I0130 16:11:44.987221 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e49666f-5b34-430c-bfa4-c85208433cda-webhook-cert\") pod \"metallb-operator-webhook-server-7f44447989-gnfds\" (UID: \"7e49666f-5b34-430c-bfa4-c85208433cda\") " pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:45 crc kubenswrapper[4740]: I0130 16:11:44.997095 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7e49666f-5b34-430c-bfa4-c85208433cda-webhook-cert\") pod \"metallb-operator-webhook-server-7f44447989-gnfds\" (UID: \"7e49666f-5b34-430c-bfa4-c85208433cda\") " pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:45 crc kubenswrapper[4740]: I0130 16:11:44.997120 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7e49666f-5b34-430c-bfa4-c85208433cda-apiservice-cert\") pod \"metallb-operator-webhook-server-7f44447989-gnfds\" (UID: \"7e49666f-5b34-430c-bfa4-c85208433cda\") " pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:45 crc kubenswrapper[4740]: I0130 16:11:45.016895 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z764n\" (UniqueName: \"kubernetes.io/projected/7e49666f-5b34-430c-bfa4-c85208433cda-kube-api-access-z764n\") pod \"metallb-operator-webhook-server-7f44447989-gnfds\" (UID: \"7e49666f-5b34-430c-bfa4-c85208433cda\") " pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:45 crc kubenswrapper[4740]: I0130 16:11:45.072592 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:45 crc kubenswrapper[4740]: I0130 16:11:45.190038 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw"]
Jan 30 16:11:45 crc kubenswrapper[4740]: I0130 16:11:45.524722 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"]
Jan 30 16:11:45 crc kubenswrapper[4740]: W0130 16:11:45.529182 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e49666f_5b34_430c_bfa4_c85208433cda.slice/crio-56f9930d1746f696cfeebb24df482b4bfe4d9879951e17ce283c184c8625cb68 WatchSource:0}: Error finding container 56f9930d1746f696cfeebb24df482b4bfe4d9879951e17ce283c184c8625cb68: Status 404 returned error can't find the container with id 56f9930d1746f696cfeebb24df482b4bfe4d9879951e17ce283c184c8625cb68
Jan 30 16:11:46 crc kubenswrapper[4740]: I0130 16:11:46.190306 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds" event={"ID":"7e49666f-5b34-430c-bfa4-c85208433cda","Type":"ContainerStarted","Data":"56f9930d1746f696cfeebb24df482b4bfe4d9879951e17ce283c184c8625cb68"}
Jan 30 16:11:46 crc kubenswrapper[4740]: I0130 16:11:46.191914 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" event={"ID":"de72a0d2-8f4e-442e-99e0-8179782f810b","Type":"ContainerStarted","Data":"a42fdec8125499fa3ac908720014ec917d5f48d2591830782b3b8d2628e2c965"}
Jan 30 16:11:53 crc kubenswrapper[4740]: I0130 16:11:53.255012 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds" event={"ID":"7e49666f-5b34-430c-bfa4-c85208433cda","Type":"ContainerStarted","Data":"71e58d688ff3ac49e65eb912691cef1fb73f21ade02091a30816d142f33a222f"}
Jan 30 16:11:53 crc kubenswrapper[4740]: I0130 16:11:53.255816 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:11:53 crc kubenswrapper[4740]: I0130 16:11:53.257292 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" event={"ID":"de72a0d2-8f4e-442e-99e0-8179782f810b","Type":"ContainerStarted","Data":"f3964e5eda9c8510292cc5e20cdd13a0e1d32572a1182a9e7eb2a4bcc23e9441"}
Jan 30 16:11:53 crc kubenswrapper[4740]: I0130 16:11:53.257488 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw"
Jan 30 16:11:53 crc kubenswrapper[4740]: I0130 16:11:53.294835 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds" podStartSLOduration=2.601384183 podStartE2EDuration="9.294806879s" podCreationTimestamp="2026-01-30 16:11:44 +0000 UTC" firstStartedPulling="2026-01-30 16:11:45.533157246 +0000 UTC m=+954.170219845" lastFinishedPulling="2026-01-30 16:11:52.226579942 +0000 UTC m=+960.863642541" observedRunningTime="2026-01-30 16:11:53.289019184 +0000 UTC m=+961.926081793" watchObservedRunningTime="2026-01-30 16:11:53.294806879 +0000 UTC m=+961.931869478"
Jan 30 16:11:53 crc kubenswrapper[4740]: I0130 16:11:53.324971 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" podStartSLOduration=2.326969933 podStartE2EDuration="9.32495038s" podCreationTimestamp="2026-01-30 16:11:44 +0000 UTC" firstStartedPulling="2026-01-30 16:11:45.205327145 +0000 UTC m=+953.842389744" lastFinishedPulling="2026-01-30 16:11:52.203307592 +0000 UTC m=+960.840370191" observedRunningTime="2026-01-30 16:11:53.324619092 +0000 UTC m=+961.961681701" watchObservedRunningTime="2026-01-30 16:11:53.32495038 +0000 UTC m=+961.962012979"
Jan 30 16:11:54 crc kubenswrapper[4740]: I0130 16:11:54.454920 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:11:54 crc kubenswrapper[4740]: I0130 16:11:54.455428 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:12:05 crc kubenswrapper[4740]: I0130 16:12:05.078743 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f44447989-gnfds"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.235612 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v2n77"]
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.236878 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.296729 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2n77"]
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.331439 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-utilities\") pod \"community-operators-v2n77\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") " pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.331501 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-catalog-content\") pod \"community-operators-v2n77\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") " pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.331703 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snl97\" (UniqueName: \"kubernetes.io/projected/83d8af77-0cd0-4e00-aa24-545f0c88b97a-kube-api-access-snl97\") pod \"community-operators-v2n77\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") " pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.433337 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-catalog-content\") pod \"community-operators-v2n77\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") " pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.433503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snl97\" (UniqueName: \"kubernetes.io/projected/83d8af77-0cd0-4e00-aa24-545f0c88b97a-kube-api-access-snl97\") pod \"community-operators-v2n77\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") " pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.433552 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-utilities\") pod \"community-operators-v2n77\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") " pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.433972 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-catalog-content\") pod \"community-operators-v2n77\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") " pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.434068 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-utilities\") pod \"community-operators-v2n77\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") " pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.456352 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snl97\" (UniqueName: \"kubernetes.io/projected/83d8af77-0cd0-4e00-aa24-545f0c88b97a-kube-api-access-snl97\") pod \"community-operators-v2n77\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") " pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.555697 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:07 crc kubenswrapper[4740]: I0130 16:12:07.912352 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2n77"]
Jan 30 16:12:07 crc kubenswrapper[4740]: W0130 16:12:07.924078 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d8af77_0cd0_4e00_aa24_545f0c88b97a.slice/crio-413d2378e354c55307822951a7ad38ab8b62d4d9d1c7c97610790c57bbf17b32 WatchSource:0}: Error finding container 413d2378e354c55307822951a7ad38ab8b62d4d9d1c7c97610790c57bbf17b32: Status 404 returned error can't find the container with id 413d2378e354c55307822951a7ad38ab8b62d4d9d1c7c97610790c57bbf17b32
Jan 30 16:12:08 crc kubenswrapper[4740]: I0130 16:12:08.377455 4740 generic.go:334] "Generic (PLEG): container finished" podID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" containerID="f663f91e15feb1dd58dc684fe110198359b154a7faacf9ef897231694ed2b886" exitCode=0
Jan 30 16:12:08 crc kubenswrapper[4740]: I0130 16:12:08.377519 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2n77" event={"ID":"83d8af77-0cd0-4e00-aa24-545f0c88b97a","Type":"ContainerDied","Data":"f663f91e15feb1dd58dc684fe110198359b154a7faacf9ef897231694ed2b886"}
Jan 30 16:12:08 crc kubenswrapper[4740]: I0130 16:12:08.377555 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2n77" event={"ID":"83d8af77-0cd0-4e00-aa24-545f0c88b97a","Type":"ContainerStarted","Data":"413d2378e354c55307822951a7ad38ab8b62d4d9d1c7c97610790c57bbf17b32"}
Jan 30 16:12:09 crc kubenswrapper[4740]: I0130 16:12:09.387075 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2n77" event={"ID":"83d8af77-0cd0-4e00-aa24-545f0c88b97a","Type":"ContainerStarted","Data":"aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388"}
Jan 30 16:12:10 crc kubenswrapper[4740]: I0130 16:12:10.396637 4740 generic.go:334] "Generic (PLEG): container finished" podID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" containerID="aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388" exitCode=0
Jan 30 16:12:10 crc kubenswrapper[4740]: I0130 16:12:10.396758 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2n77" event={"ID":"83d8af77-0cd0-4e00-aa24-545f0c88b97a","Type":"ContainerDied","Data":"aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388"}
Jan 30 16:12:11 crc kubenswrapper[4740]: I0130 16:12:11.408285 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2n77" event={"ID":"83d8af77-0cd0-4e00-aa24-545f0c88b97a","Type":"ContainerStarted","Data":"7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b"}
Jan 30 16:12:11 crc kubenswrapper[4740]: I0130 16:12:11.427215 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v2n77" podStartSLOduration=1.877850196 podStartE2EDuration="4.427184025s" podCreationTimestamp="2026-01-30 16:12:07 +0000 UTC" firstStartedPulling="2026-01-30 16:12:08.379416869 +0000 UTC m=+977.016479468" lastFinishedPulling="2026-01-30 16:12:10.928750698 +0000 UTC m=+979.565813297" observedRunningTime="2026-01-30 16:12:11.42579149 +0000 UTC m=+980.062854089" watchObservedRunningTime="2026-01-30 16:12:11.427184025 +0000 UTC m=+980.064246624"
Jan 30 16:12:17 crc kubenswrapper[4740]: I0130 16:12:17.556719 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:17 crc kubenswrapper[4740]: I0130 16:12:17.557486 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:17 crc kubenswrapper[4740]: I0130 16:12:17.625272 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.262990 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jdfxs"]
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.264843 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdfxs"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.274905 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jdfxs"]
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.399198 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-utilities\") pod \"certified-operators-jdfxs\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " pod="openshift-marketplace/certified-operators-jdfxs"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.399266 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-catalog-content\") pod \"certified-operators-jdfxs\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " pod="openshift-marketplace/certified-operators-jdfxs"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.399415 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgn9z\" (UniqueName: \"kubernetes.io/projected/28d0994f-bf3b-4c5f-95f3-fe894a765dda-kube-api-access-mgn9z\") pod \"certified-operators-jdfxs\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " pod="openshift-marketplace/certified-operators-jdfxs"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.501291 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgn9z\" (UniqueName: \"kubernetes.io/projected/28d0994f-bf3b-4c5f-95f3-fe894a765dda-kube-api-access-mgn9z\") pod \"certified-operators-jdfxs\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " pod="openshift-marketplace/certified-operators-jdfxs"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.501419 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-utilities\") pod \"certified-operators-jdfxs\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " pod="openshift-marketplace/certified-operators-jdfxs"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.501455 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-catalog-content\") pod \"certified-operators-jdfxs\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " pod="openshift-marketplace/certified-operators-jdfxs"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.502061 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-utilities\") pod \"certified-operators-jdfxs\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " pod="openshift-marketplace/certified-operators-jdfxs"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.502114 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-catalog-content\") pod \"certified-operators-jdfxs\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " pod="openshift-marketplace/certified-operators-jdfxs"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.521134 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.527536 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgn9z\" (UniqueName: \"kubernetes.io/projected/28d0994f-bf3b-4c5f-95f3-fe894a765dda-kube-api-access-mgn9z\") pod \"certified-operators-jdfxs\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " pod="openshift-marketplace/certified-operators-jdfxs"
Jan 30 16:12:18 crc kubenswrapper[4740]: I0130 16:12:18.590588 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdfxs"
Jan 30 16:12:19 crc kubenswrapper[4740]: I0130 16:12:19.223314 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jdfxs"]
Jan 30 16:12:19 crc kubenswrapper[4740]: I0130 16:12:19.467126 4740 generic.go:334] "Generic (PLEG): container finished" podID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" containerID="1f50f89f915289a726a829ebcbef1f198182e649161166b4b758996a797610a6" exitCode=0
Jan 30 16:12:19 crc kubenswrapper[4740]: I0130 16:12:19.467248 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdfxs" event={"ID":"28d0994f-bf3b-4c5f-95f3-fe894a765dda","Type":"ContainerDied","Data":"1f50f89f915289a726a829ebcbef1f198182e649161166b4b758996a797610a6"}
Jan 30 16:12:19 crc kubenswrapper[4740]: I0130 16:12:19.467589 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdfxs" event={"ID":"28d0994f-bf3b-4c5f-95f3-fe894a765dda","Type":"ContainerStarted","Data":"488cd143ee285a0a5513d703017e23204ce58d54c0a48c81130cd3bbcde099b3"}
Jan 30 16:12:20 crc kubenswrapper[4740]: I0130 16:12:20.481715 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdfxs" event={"ID":"28d0994f-bf3b-4c5f-95f3-fe894a765dda","Type":"ContainerStarted","Data":"9dc193355bed17ba089651c4ec2e3c04631792d1ea71066a41af158655f3f7be"}
Jan 30 16:12:20 crc kubenswrapper[4740]: I0130 16:12:20.864206 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2n77"]
Jan 30 16:12:20 crc kubenswrapper[4740]: I0130 16:12:20.864885 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v2n77" podUID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" containerName="registry-server" containerID="cri-o://7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b" gracePeriod=2
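
The "Killing container with a grace period ... gracePeriod=2" entry above is the kubelet acting on an API-side delete that carried a two-second grace period. The equivalent client-side request looks like the following sketch (kubeconfig path is an assumption; namespace and pod name are taken from the log):

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path is an assumption
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	grace := int64(2) // matches gracePeriod=2 in the log entry above
    	err = clientset.CoreV1().Pods("openshift-marketplace").Delete(
    		context.TODO(),
    		"community-operators-v2n77",
    		metav1.DeleteOptions{GracePeriodSeconds: &grace},
    	)
    	if err != nil {
    		panic(err)
    	}
    }

After the grace period elapses, the runtime sends SIGKILL to anything still running, and the kubelet then unmounts the pod's volumes, as the UnmountVolume entries below show.
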
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.258303 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.450059 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-catalog-content\") pod \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") "
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.450227 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-utilities\") pod \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") "
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.450307 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snl97\" (UniqueName: \"kubernetes.io/projected/83d8af77-0cd0-4e00-aa24-545f0c88b97a-kube-api-access-snl97\") pod \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\" (UID: \"83d8af77-0cd0-4e00-aa24-545f0c88b97a\") "
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.453121 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-utilities" (OuterVolumeSpecName: "utilities") pod "83d8af77-0cd0-4e00-aa24-545f0c88b97a" (UID: "83d8af77-0cd0-4e00-aa24-545f0c88b97a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.458599 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d8af77-0cd0-4e00-aa24-545f0c88b97a-kube-api-access-snl97" (OuterVolumeSpecName: "kube-api-access-snl97") pod "83d8af77-0cd0-4e00-aa24-545f0c88b97a" (UID: "83d8af77-0cd0-4e00-aa24-545f0c88b97a"). InnerVolumeSpecName "kube-api-access-snl97". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.490193 4740 generic.go:334] "Generic (PLEG): container finished" podID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" containerID="7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b" exitCode=0
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.490255 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2n77"
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.490281 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2n77" event={"ID":"83d8af77-0cd0-4e00-aa24-545f0c88b97a","Type":"ContainerDied","Data":"7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b"}
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.490320 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2n77" event={"ID":"83d8af77-0cd0-4e00-aa24-545f0c88b97a","Type":"ContainerDied","Data":"413d2378e354c55307822951a7ad38ab8b62d4d9d1c7c97610790c57bbf17b32"}
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.490341 4740 scope.go:117] "RemoveContainer" containerID="7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b"
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.494122 4740 generic.go:334] "Generic (PLEG): container finished" podID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" containerID="9dc193355bed17ba089651c4ec2e3c04631792d1ea71066a41af158655f3f7be" exitCode=0
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.494161 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdfxs" event={"ID":"28d0994f-bf3b-4c5f-95f3-fe894a765dda","Type":"ContainerDied","Data":"9dc193355bed17ba089651c4ec2e3c04631792d1ea71066a41af158655f3f7be"}
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.509777 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83d8af77-0cd0-4e00-aa24-545f0c88b97a" (UID: "83d8af77-0cd0-4e00-aa24-545f0c88b97a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.519278 4740 scope.go:117] "RemoveContainer" containerID="aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388"
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.547995 4740 scope.go:117] "RemoveContainer" containerID="f663f91e15feb1dd58dc684fe110198359b154a7faacf9ef897231694ed2b886"
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.552296 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.552333 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d8af77-0cd0-4e00-aa24-545f0c88b97a-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.552362 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snl97\" (UniqueName: \"kubernetes.io/projected/83d8af77-0cd0-4e00-aa24-545f0c88b97a-kube-api-access-snl97\") on node \"crc\" DevicePath \"\""
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.573950 4740 scope.go:117] "RemoveContainer" containerID="7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b"
Jan 30 16:12:21 crc kubenswrapper[4740]: E0130 16:12:21.574591 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b\": container with ID starting with 7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b not found: ID does not exist" containerID="7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b"
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.574642 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b"} err="failed to get container status \"7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b\": rpc error: code = NotFound desc = could not find container \"7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b\": container with ID starting with 7b6bc32a1e7ffd6115dbeea1a0c0090b38982d21885275c43ae9a5405976f16b not found: ID does not exist"
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.574674 4740 scope.go:117] "RemoveContainer" containerID="aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388"
Jan 30 16:12:21 crc kubenswrapper[4740]: E0130 16:12:21.575031 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388\": container with ID starting with aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388 not found: ID does not exist" containerID="aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388"
Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.575089 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388"} err="failed to get container status \"aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388\": rpc error: code = NotFound desc = could not find container \"aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388\": container with ID starting with aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388 not found: ID does not exist"
\"aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388\": container with ID starting with aff98f0453ef1b6dfacdf7e1f6b3d9a00d25cff6ef50cece0d7fa6f706eb5388 not found: ID does not exist" Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.575125 4740 scope.go:117] "RemoveContainer" containerID="f663f91e15feb1dd58dc684fe110198359b154a7faacf9ef897231694ed2b886" Jan 30 16:12:21 crc kubenswrapper[4740]: E0130 16:12:21.575495 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f663f91e15feb1dd58dc684fe110198359b154a7faacf9ef897231694ed2b886\": container with ID starting with f663f91e15feb1dd58dc684fe110198359b154a7faacf9ef897231694ed2b886 not found: ID does not exist" containerID="f663f91e15feb1dd58dc684fe110198359b154a7faacf9ef897231694ed2b886" Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.575532 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f663f91e15feb1dd58dc684fe110198359b154a7faacf9ef897231694ed2b886"} err="failed to get container status \"f663f91e15feb1dd58dc684fe110198359b154a7faacf9ef897231694ed2b886\": rpc error: code = NotFound desc = could not find container \"f663f91e15feb1dd58dc684fe110198359b154a7faacf9ef897231694ed2b886\": container with ID starting with f663f91e15feb1dd58dc684fe110198359b154a7faacf9ef897231694ed2b886 not found: ID does not exist" Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.831461 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v2n77"] Jan 30 16:12:21 crc kubenswrapper[4740]: I0130 16:12:21.840854 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v2n77"] Jan 30 16:12:22 crc kubenswrapper[4740]: I0130 16:12:22.505637 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdfxs" event={"ID":"28d0994f-bf3b-4c5f-95f3-fe894a765dda","Type":"ContainerStarted","Data":"7371610d072920c8577fc30d62eaeaac2b50b7f4afbf1c92032a575d118aae70"} Jan 30 16:12:22 crc kubenswrapper[4740]: I0130 16:12:22.560007 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jdfxs" podStartSLOduration=1.8504787820000002 podStartE2EDuration="4.559988598s" podCreationTimestamp="2026-01-30 16:12:18 +0000 UTC" firstStartedPulling="2026-01-30 16:12:19.469214081 +0000 UTC m=+988.106276680" lastFinishedPulling="2026-01-30 16:12:22.178723897 +0000 UTC m=+990.815786496" observedRunningTime="2026-01-30 16:12:22.559003773 +0000 UTC m=+991.196066372" watchObservedRunningTime="2026-01-30 16:12:22.559988598 +0000 UTC m=+991.197051187" Jan 30 16:12:23 crc kubenswrapper[4740]: I0130 16:12:23.368571 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" path="/var/lib/kubelet/pods/83d8af77-0cd0-4e00-aa24-545f0c88b97a/volumes" Jan 30 16:12:24 crc kubenswrapper[4740]: I0130 16:12:24.455031 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:12:24 crc kubenswrapper[4740]: I0130 16:12:24.455623 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" 
podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:12:24 crc kubenswrapper[4740]: I0130 16:12:24.691751 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-679cd9954d-7f5xw" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.361540 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g"] Jan 30 16:12:25 crc kubenswrapper[4740]: E0130 16:12:25.361998 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" containerName="extract-utilities" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.362017 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" containerName="extract-utilities" Jan 30 16:12:25 crc kubenswrapper[4740]: E0130 16:12:25.362027 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" containerName="registry-server" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.362038 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" containerName="registry-server" Jan 30 16:12:25 crc kubenswrapper[4740]: E0130 16:12:25.362054 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" containerName="extract-content" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.362062 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" containerName="extract-content" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.362199 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d8af77-0cd0-4e00-aa24-545f0c88b97a" containerName="registry-server" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.362806 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.365240 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.373022 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bmfdq"] Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.373720 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qzqtc" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.375685 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.381818 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.382102 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.387212 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g"] Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.478779 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lbsrp"] Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.479881 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lbsrp" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.483794 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.483794 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.483848 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.483915 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-l684p" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.500321 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-spzrl"] Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.501515 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-spzrl" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.504485 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.507145 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbfgz\" (UniqueName: \"kubernetes.io/projected/229e5897-0e63-4b65-8142-77d97ef63ca3-kube-api-access-bbfgz\") pod \"frr-k8s-webhook-server-7df86c4f6c-kj46g\" (UID: \"229e5897-0e63-4b65-8142-77d97ef63ca3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.507218 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsfw\" (UniqueName: \"kubernetes.io/projected/2602e38b-af1a-4ece-8430-1c1ba3fe5044-kube-api-access-qfsfw\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.507261 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-metrics\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.507299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2602e38b-af1a-4ece-8430-1c1ba3fe5044-metrics-certs\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.507341 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-frr-sockets\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.507379 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2602e38b-af1a-4ece-8430-1c1ba3fe5044-frr-startup\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.507408 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-reloader\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.507457 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/229e5897-0e63-4b65-8142-77d97ef63ca3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kj46g\" (UID: \"229e5897-0e63-4b65-8142-77d97ef63ca3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.507503 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" 
(UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-frr-conf\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.515014 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-spzrl"] Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609481 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbfgz\" (UniqueName: \"kubernetes.io/projected/229e5897-0e63-4b65-8142-77d97ef63ca3-kube-api-access-bbfgz\") pod \"frr-k8s-webhook-server-7df86c4f6c-kj46g\" (UID: \"229e5897-0e63-4b65-8142-77d97ef63ca3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609545 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-metrics-certs\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609582 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-metallb-excludel2\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609609 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49s6r\" (UniqueName: \"kubernetes.io/projected/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-kube-api-access-49s6r\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609640 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfsfw\" (UniqueName: \"kubernetes.io/projected/2602e38b-af1a-4ece-8430-1c1ba3fe5044-kube-api-access-qfsfw\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609665 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-memberlist\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609693 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-metrics\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2602e38b-af1a-4ece-8430-1c1ba3fe5044-metrics-certs\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609782 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-frr-sockets\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609812 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2602e38b-af1a-4ece-8430-1c1ba3fe5044-frr-startup\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609841 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-reloader\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609867 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/229e5897-0e63-4b65-8142-77d97ef63ca3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kj46g\" (UID: \"229e5897-0e63-4b65-8142-77d97ef63ca3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-frr-conf\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609950 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2516854f-e7b5-4af2-a473-72ad1644043a-cert\") pod \"controller-6968d8fdc4-spzrl\" (UID: \"2516854f-e7b5-4af2-a473-72ad1644043a\") " pod="metallb-system/controller-6968d8fdc4-spzrl" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.609991 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x8n9\" (UniqueName: \"kubernetes.io/projected/2516854f-e7b5-4af2-a473-72ad1644043a-kube-api-access-4x8n9\") pod \"controller-6968d8fdc4-spzrl\" (UID: \"2516854f-e7b5-4af2-a473-72ad1644043a\") " pod="metallb-system/controller-6968d8fdc4-spzrl" Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.610023 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2516854f-e7b5-4af2-a473-72ad1644043a-metrics-certs\") pod \"controller-6968d8fdc4-spzrl\" (UID: \"2516854f-e7b5-4af2-a473-72ad1644043a\") " pod="metallb-system/controller-6968d8fdc4-spzrl" Jan 30 16:12:25 crc kubenswrapper[4740]: E0130 16:12:25.610560 4740 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 16:12:25 crc kubenswrapper[4740]: E0130 16:12:25.610633 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2602e38b-af1a-4ece-8430-1c1ba3fe5044-metrics-certs podName:2602e38b-af1a-4ece-8430-1c1ba3fe5044 nodeName:}" failed. No retries permitted until 2026-01-30 16:12:26.110614685 +0000 UTC m=+994.747677284 (durationBeforeRetry 500ms). 
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.611622 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-frr-sockets\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.611827 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-frr-conf\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.611937 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-reloader\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.612001 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2602e38b-af1a-4ece-8430-1c1ba3fe5044-metrics\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.612333 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2602e38b-af1a-4ece-8430-1c1ba3fe5044-frr-startup\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.620059 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/229e5897-0e63-4b65-8142-77d97ef63ca3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-kj46g\" (UID: \"229e5897-0e63-4b65-8142-77d97ef63ca3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.627905 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbfgz\" (UniqueName: \"kubernetes.io/projected/229e5897-0e63-4b65-8142-77d97ef63ca3-kube-api-access-bbfgz\") pod \"frr-k8s-webhook-server-7df86c4f6c-kj46g\" (UID: \"229e5897-0e63-4b65-8142-77d97ef63ca3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.632383 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfsfw\" (UniqueName: \"kubernetes.io/projected/2602e38b-af1a-4ece-8430-1c1ba3fe5044-kube-api-access-qfsfw\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.687090 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.711685 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2516854f-e7b5-4af2-a473-72ad1644043a-metrics-certs\") pod \"controller-6968d8fdc4-spzrl\" (UID: \"2516854f-e7b5-4af2-a473-72ad1644043a\") " pod="metallb-system/controller-6968d8fdc4-spzrl"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.711756 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-metrics-certs\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.711778 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-metallb-excludel2\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.711799 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49s6r\" (UniqueName: \"kubernetes.io/projected/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-kube-api-access-49s6r\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.711825 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-memberlist\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.711897 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2516854f-e7b5-4af2-a473-72ad1644043a-cert\") pod \"controller-6968d8fdc4-spzrl\" (UID: \"2516854f-e7b5-4af2-a473-72ad1644043a\") " pod="metallb-system/controller-6968d8fdc4-spzrl"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.711929 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x8n9\" (UniqueName: \"kubernetes.io/projected/2516854f-e7b5-4af2-a473-72ad1644043a-kube-api-access-4x8n9\") pod \"controller-6968d8fdc4-spzrl\" (UID: \"2516854f-e7b5-4af2-a473-72ad1644043a\") " pod="metallb-system/controller-6968d8fdc4-spzrl"
Jan 30 16:12:25 crc kubenswrapper[4740]: E0130 16:12:25.713922 4740 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 30 16:12:25 crc kubenswrapper[4740]: E0130 16:12:25.714012 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-memberlist podName:d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9 nodeName:}" failed. No retries permitted until 2026-01-30 16:12:26.213992789 +0000 UTC m=+994.851055388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-memberlist") pod "speaker-lbsrp" (UID: "d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9") : secret "metallb-memberlist" not found
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.714866 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-metallb-excludel2\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.716896 4740 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.717093 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2516854f-e7b5-4af2-a473-72ad1644043a-metrics-certs\") pod \"controller-6968d8fdc4-spzrl\" (UID: \"2516854f-e7b5-4af2-a473-72ad1644043a\") " pod="metallb-system/controller-6968d8fdc4-spzrl"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.718673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-metrics-certs\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.734736 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2516854f-e7b5-4af2-a473-72ad1644043a-cert\") pod \"controller-6968d8fdc4-spzrl\" (UID: \"2516854f-e7b5-4af2-a473-72ad1644043a\") " pod="metallb-system/controller-6968d8fdc4-spzrl"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.738798 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49s6r\" (UniqueName: \"kubernetes.io/projected/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-kube-api-access-49s6r\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.742332 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x8n9\" (UniqueName: \"kubernetes.io/projected/2516854f-e7b5-4af2-a473-72ad1644043a-kube-api-access-4x8n9\") pod \"controller-6968d8fdc4-spzrl\" (UID: \"2516854f-e7b5-4af2-a473-72ad1644043a\") " pod="metallb-system/controller-6968d8fdc4-spzrl"
Jan 30 16:12:25 crc kubenswrapper[4740]: I0130 16:12:25.818272 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-spzrl" Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.064224 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-spzrl"] Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.128462 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2602e38b-af1a-4ece-8430-1c1ba3fe5044-metrics-certs\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.130425 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g"] Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.139303 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2602e38b-af1a-4ece-8430-1c1ba3fe5044-metrics-certs\") pod \"frr-k8s-bmfdq\" (UID: \"2602e38b-af1a-4ece-8430-1c1ba3fe5044\") " pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.229767 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-memberlist\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp" Jan 30 16:12:26 crc kubenswrapper[4740]: E0130 16:12:26.229910 4740 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 16:12:26 crc kubenswrapper[4740]: E0130 16:12:26.229987 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-memberlist podName:d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9 nodeName:}" failed. No retries permitted until 2026-01-30 16:12:27.229969163 +0000 UTC m=+995.867031762 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-memberlist") pod "speaker-lbsrp" (UID: "d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9") : secret "metallb-memberlist" not found Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.300165 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.532946 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bmfdq" event={"ID":"2602e38b-af1a-4ece-8430-1c1ba3fe5044","Type":"ContainerStarted","Data":"1b907e8e08374c6ddeea80d6aa133eb15573e0376228966dc4c8b193c0faba39"} Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.534939 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g" event={"ID":"229e5897-0e63-4b65-8142-77d97ef63ca3","Type":"ContainerStarted","Data":"30e1f0c9d3d9fe51ea46232d7d337a4e52737f0eca879c6873679571080ba275"} Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.536711 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-spzrl" event={"ID":"2516854f-e7b5-4af2-a473-72ad1644043a","Type":"ContainerStarted","Data":"ffc0364d9261234101baa7fb9d451924d884dcce7f39d990120bcaef70a7a24e"} Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.536746 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-spzrl" event={"ID":"2516854f-e7b5-4af2-a473-72ad1644043a","Type":"ContainerStarted","Data":"bcdf6caf919ff888bad8a00eb997ed32fac15af92220c57066e64e665074e7d0"} Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.536761 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-spzrl" event={"ID":"2516854f-e7b5-4af2-a473-72ad1644043a","Type":"ContainerStarted","Data":"f0795c6767a64cbfbdf603403420584ae2bc87c0a598a5eb0f19b968f357a691"} Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.537579 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-spzrl" Jan 30 16:12:26 crc kubenswrapper[4740]: I0130 16:12:26.559628 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-spzrl" podStartSLOduration=1.5596021580000001 podStartE2EDuration="1.559602158s" podCreationTimestamp="2026-01-30 16:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:12:26.553060215 +0000 UTC m=+995.190122814" watchObservedRunningTime="2026-01-30 16:12:26.559602158 +0000 UTC m=+995.196664767" Jan 30 16:12:27 crc kubenswrapper[4740]: I0130 16:12:27.243702 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-memberlist\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp" Jan 30 16:12:27 crc kubenswrapper[4740]: I0130 16:12:27.249282 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9-memberlist\") pod \"speaker-lbsrp\" (UID: \"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9\") " pod="metallb-system/speaker-lbsrp" Jan 30 16:12:27 crc kubenswrapper[4740]: I0130 16:12:27.296100 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-lbsrp" Jan 30 16:12:27 crc kubenswrapper[4740]: W0130 16:12:27.323465 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2a72191_5c19_4a76_bc9d_6d5a2d07e8d9.slice/crio-fb2954007d7e96e202563a817887e183ee6ba38a05cf010c4bbf8d0d39b93348 WatchSource:0}: Error finding container fb2954007d7e96e202563a817887e183ee6ba38a05cf010c4bbf8d0d39b93348: Status 404 returned error can't find the container with id fb2954007d7e96e202563a817887e183ee6ba38a05cf010c4bbf8d0d39b93348 Jan 30 16:12:27 crc kubenswrapper[4740]: I0130 16:12:27.546443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lbsrp" event={"ID":"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9","Type":"ContainerStarted","Data":"fb2954007d7e96e202563a817887e183ee6ba38a05cf010c4bbf8d0d39b93348"} Jan 30 16:12:28 crc kubenswrapper[4740]: I0130 16:12:28.561581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lbsrp" event={"ID":"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9","Type":"ContainerStarted","Data":"08d8b6e1acdd9854235ffe7d58a5acd667144f4f25b37122b09ba7325b0f71a9"} Jan 30 16:12:28 crc kubenswrapper[4740]: I0130 16:12:28.562057 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lbsrp" event={"ID":"d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9","Type":"ContainerStarted","Data":"cfcb97045cc449f036095503ea78c28192379c299acb78918010ffac03355532"} Jan 30 16:12:28 crc kubenswrapper[4740]: I0130 16:12:28.562080 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lbsrp" Jan 30 16:12:28 crc kubenswrapper[4740]: I0130 16:12:28.583907 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lbsrp" podStartSLOduration=3.5838860869999998 podStartE2EDuration="3.583886087s" podCreationTimestamp="2026-01-30 16:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:12:28.583461317 +0000 UTC m=+997.220523916" watchObservedRunningTime="2026-01-30 16:12:28.583886087 +0000 UTC m=+997.220948676" Jan 30 16:12:28 crc kubenswrapper[4740]: I0130 16:12:28.591629 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jdfxs" Jan 30 16:12:28 crc kubenswrapper[4740]: I0130 16:12:28.591840 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jdfxs" Jan 30 16:12:28 crc kubenswrapper[4740]: I0130 16:12:28.682256 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jdfxs" Jan 30 16:12:29 crc kubenswrapper[4740]: I0130 16:12:29.659598 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jdfxs" Jan 30 16:12:29 crc kubenswrapper[4740]: I0130 16:12:29.730537 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jdfxs"] Jan 30 16:12:31 crc kubenswrapper[4740]: I0130 16:12:31.594790 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jdfxs" podUID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" containerName="registry-server" containerID="cri-o://7371610d072920c8577fc30d62eaeaac2b50b7f4afbf1c92032a575d118aae70" gracePeriod=2 Jan 
30 16:12:32 crc kubenswrapper[4740]: I0130 16:12:32.608207 4740 generic.go:334] "Generic (PLEG): container finished" podID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" containerID="7371610d072920c8577fc30d62eaeaac2b50b7f4afbf1c92032a575d118aae70" exitCode=0 Jan 30 16:12:32 crc kubenswrapper[4740]: I0130 16:12:32.608313 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdfxs" event={"ID":"28d0994f-bf3b-4c5f-95f3-fe894a765dda","Type":"ContainerDied","Data":"7371610d072920c8577fc30d62eaeaac2b50b7f4afbf1c92032a575d118aae70"} Jan 30 16:12:35 crc kubenswrapper[4740]: I0130 16:12:35.747724 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdfxs" Jan 30 16:12:35 crc kubenswrapper[4740]: I0130 16:12:35.950452 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-utilities\") pod \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " Jan 30 16:12:35 crc kubenswrapper[4740]: I0130 16:12:35.950989 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgn9z\" (UniqueName: \"kubernetes.io/projected/28d0994f-bf3b-4c5f-95f3-fe894a765dda-kube-api-access-mgn9z\") pod \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " Jan 30 16:12:35 crc kubenswrapper[4740]: I0130 16:12:35.951036 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-catalog-content\") pod \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\" (UID: \"28d0994f-bf3b-4c5f-95f3-fe894a765dda\") " Jan 30 16:12:35 crc kubenswrapper[4740]: I0130 16:12:35.954065 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-utilities" (OuterVolumeSpecName: "utilities") pod "28d0994f-bf3b-4c5f-95f3-fe894a765dda" (UID: "28d0994f-bf3b-4c5f-95f3-fe894a765dda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:12:35 crc kubenswrapper[4740]: I0130 16:12:35.957208 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d0994f-bf3b-4c5f-95f3-fe894a765dda-kube-api-access-mgn9z" (OuterVolumeSpecName: "kube-api-access-mgn9z") pod "28d0994f-bf3b-4c5f-95f3-fe894a765dda" (UID: "28d0994f-bf3b-4c5f-95f3-fe894a765dda"). InnerVolumeSpecName "kube-api-access-mgn9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.002026 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28d0994f-bf3b-4c5f-95f3-fe894a765dda" (UID: "28d0994f-bf3b-4c5f-95f3-fe894a765dda"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.052614 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgn9z\" (UniqueName: \"kubernetes.io/projected/28d0994f-bf3b-4c5f-95f3-fe894a765dda-kube-api-access-mgn9z\") on node \"crc\" DevicePath \"\"" Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.052958 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.053041 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d0994f-bf3b-4c5f-95f3-fe894a765dda-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.637755 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g" event={"ID":"229e5897-0e63-4b65-8142-77d97ef63ca3","Type":"ContainerStarted","Data":"6ed146e20e788f0fabdef852e5bb637bf91f3d528f6ea79113b4ba033c2832bd"} Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.638192 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g" Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.640170 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdfxs" event={"ID":"28d0994f-bf3b-4c5f-95f3-fe894a765dda","Type":"ContainerDied","Data":"488cd143ee285a0a5513d703017e23204ce58d54c0a48c81130cd3bbcde099b3"} Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.640213 4740 scope.go:117] "RemoveContainer" containerID="7371610d072920c8577fc30d62eaeaac2b50b7f4afbf1c92032a575d118aae70" Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.640218 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jdfxs" Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.642184 4740 generic.go:334] "Generic (PLEG): container finished" podID="2602e38b-af1a-4ece-8430-1c1ba3fe5044" containerID="910e5e8d357e0c929c0faf28ef7aae64b2ef16b5656dcdce5383db0fabac2c08" exitCode=0 Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.642221 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bmfdq" event={"ID":"2602e38b-af1a-4ece-8430-1c1ba3fe5044","Type":"ContainerDied","Data":"910e5e8d357e0c929c0faf28ef7aae64b2ef16b5656dcdce5383db0fabac2c08"} Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.662675 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g" podStartSLOduration=2.039054913 podStartE2EDuration="11.662653668s" podCreationTimestamp="2026-01-30 16:12:25 +0000 UTC" firstStartedPulling="2026-01-30 16:12:26.149673394 +0000 UTC m=+994.786735993" lastFinishedPulling="2026-01-30 16:12:35.773272149 +0000 UTC m=+1004.410334748" observedRunningTime="2026-01-30 16:12:36.652025714 +0000 UTC m=+1005.289088323" watchObservedRunningTime="2026-01-30 16:12:36.662653668 +0000 UTC m=+1005.299716277" Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.667525 4740 scope.go:117] "RemoveContainer" containerID="9dc193355bed17ba089651c4ec2e3c04631792d1ea71066a41af158655f3f7be" Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.695240 4740 scope.go:117] "RemoveContainer" containerID="1f50f89f915289a726a829ebcbef1f198182e649161166b4b758996a797610a6" Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.713129 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jdfxs"] Jan 30 16:12:36 crc kubenswrapper[4740]: I0130 16:12:36.720718 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jdfxs"] Jan 30 16:12:37 crc kubenswrapper[4740]: I0130 16:12:37.303720 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lbsrp" Jan 30 16:12:37 crc kubenswrapper[4740]: I0130 16:12:37.343627 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" path="/var/lib/kubelet/pods/28d0994f-bf3b-4c5f-95f3-fe894a765dda/volumes" Jan 30 16:12:37 crc kubenswrapper[4740]: I0130 16:12:37.651835 4740 generic.go:334] "Generic (PLEG): container finished" podID="2602e38b-af1a-4ece-8430-1c1ba3fe5044" containerID="1fe40db3ad415949faf7ba5fae9a344701e254ff079a95ec63b08f9c4ac129b0" exitCode=0 Jan 30 16:12:37 crc kubenswrapper[4740]: I0130 16:12:37.652644 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bmfdq" event={"ID":"2602e38b-af1a-4ece-8430-1c1ba3fe5044","Type":"ContainerDied","Data":"1fe40db3ad415949faf7ba5fae9a344701e254ff079a95ec63b08f9c4ac129b0"} Jan 30 16:12:38 crc kubenswrapper[4740]: I0130 16:12:38.663127 4740 generic.go:334] "Generic (PLEG): container finished" podID="2602e38b-af1a-4ece-8430-1c1ba3fe5044" containerID="383d387aacd142e5570035d069a755d3d17dac30e9a377f33ea3fa02e1ba4c84" exitCode=0 Jan 30 16:12:38 crc kubenswrapper[4740]: I0130 16:12:38.663191 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bmfdq" event={"ID":"2602e38b-af1a-4ece-8430-1c1ba3fe5044","Type":"ContainerDied","Data":"383d387aacd142e5570035d069a755d3d17dac30e9a377f33ea3fa02e1ba4c84"} Jan 30 16:12:39 crc 
kubenswrapper[4740]: I0130 16:12:39.679047 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bmfdq" event={"ID":"2602e38b-af1a-4ece-8430-1c1ba3fe5044","Type":"ContainerStarted","Data":"470ce2c3fe6feeef32930272542fe2ebfcca8820e8fd34298e53ccf0eaf693d8"} Jan 30 16:12:39 crc kubenswrapper[4740]: I0130 16:12:39.679485 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bmfdq" event={"ID":"2602e38b-af1a-4ece-8430-1c1ba3fe5044","Type":"ContainerStarted","Data":"38f20b3e0a209ee46ba3120ebffd034c9c98b556b1e1ee0451711504fa40b1c0"} Jan 30 16:12:39 crc kubenswrapper[4740]: I0130 16:12:39.679499 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bmfdq" event={"ID":"2602e38b-af1a-4ece-8430-1c1ba3fe5044","Type":"ContainerStarted","Data":"fdbd3a1a9ff6ea66bd10e7a79186d04f933f46046f4212ea41d493018b1b3497"} Jan 30 16:12:39 crc kubenswrapper[4740]: I0130 16:12:39.679508 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bmfdq" event={"ID":"2602e38b-af1a-4ece-8430-1c1ba3fe5044","Type":"ContainerStarted","Data":"d6cd8dfd8353cd07d7674530aaf095cb78d4f1e38c73028fabdbb10c09dc8152"} Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.196191 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-85mmg"] Jan 30 16:12:40 crc kubenswrapper[4740]: E0130 16:12:40.196558 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" containerName="extract-content" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.196576 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" containerName="extract-content" Jan 30 16:12:40 crc kubenswrapper[4740]: E0130 16:12:40.196590 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" containerName="extract-utilities" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.196599 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" containerName="extract-utilities" Jan 30 16:12:40 crc kubenswrapper[4740]: E0130 16:12:40.196632 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" containerName="registry-server" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.196641 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" containerName="registry-server" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.196820 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d0994f-bf3b-4c5f-95f3-fe894a765dda" containerName="registry-server" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.197464 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-85mmg" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.199686 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-v78n7" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.201478 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.202765 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.209842 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-85mmg"] Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.226206 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9cln\" (UniqueName: \"kubernetes.io/projected/f127bc67-fa14-42b3-8765-e14964a74fd5-kube-api-access-c9cln\") pod \"openstack-operator-index-85mmg\" (UID: \"f127bc67-fa14-42b3-8765-e14964a74fd5\") " pod="openstack-operators/openstack-operator-index-85mmg" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.328165 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9cln\" (UniqueName: \"kubernetes.io/projected/f127bc67-fa14-42b3-8765-e14964a74fd5-kube-api-access-c9cln\") pod \"openstack-operator-index-85mmg\" (UID: \"f127bc67-fa14-42b3-8765-e14964a74fd5\") " pod="openstack-operators/openstack-operator-index-85mmg" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.358304 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9cln\" (UniqueName: \"kubernetes.io/projected/f127bc67-fa14-42b3-8765-e14964a74fd5-kube-api-access-c9cln\") pod \"openstack-operator-index-85mmg\" (UID: \"f127bc67-fa14-42b3-8765-e14964a74fd5\") " pod="openstack-operators/openstack-operator-index-85mmg" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.541328 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-85mmg" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.693071 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bmfdq" event={"ID":"2602e38b-af1a-4ece-8430-1c1ba3fe5044","Type":"ContainerStarted","Data":"e31baffaac3403c354a83d0734ebae213a09487de4143d69d523be90d7cce13c"} Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.693134 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bmfdq" event={"ID":"2602e38b-af1a-4ece-8430-1c1ba3fe5044","Type":"ContainerStarted","Data":"d0e0abdeea106e618c65e5517d43f6742382bdfee7093571c36c84dbf57a9110"} Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.693531 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.979577 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-bmfdq" podStartSLOduration=6.601580764 podStartE2EDuration="15.979557835s" podCreationTimestamp="2026-01-30 16:12:25 +0000 UTC" firstStartedPulling="2026-01-30 16:12:26.418511936 +0000 UTC m=+995.055574535" lastFinishedPulling="2026-01-30 16:12:35.796489007 +0000 UTC m=+1004.433551606" observedRunningTime="2026-01-30 16:12:40.722887406 +0000 UTC m=+1009.359950005" watchObservedRunningTime="2026-01-30 16:12:40.979557835 +0000 UTC m=+1009.616620434" Jan 30 16:12:40 crc kubenswrapper[4740]: I0130 16:12:40.982844 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-85mmg"] Jan 30 16:12:41 crc kubenswrapper[4740]: I0130 16:12:41.301421 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:41 crc kubenswrapper[4740]: I0130 16:12:41.369344 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bmfdq" Jan 30 16:12:41 crc kubenswrapper[4740]: I0130 16:12:41.701149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-85mmg" event={"ID":"f127bc67-fa14-42b3-8765-e14964a74fd5","Type":"ContainerStarted","Data":"7a5d609b0beba5f52fafaeec7130fb69c0f9f9a9bd996c1e220b1adcee5df7a9"} Jan 30 16:12:43 crc kubenswrapper[4740]: I0130 16:12:43.162587 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-85mmg"] Jan 30 16:12:43 crc kubenswrapper[4740]: I0130 16:12:43.776924 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rwqps"] Jan 30 16:12:43 crc kubenswrapper[4740]: I0130 16:12:43.779065 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rwqps" Jan 30 16:12:43 crc kubenswrapper[4740]: I0130 16:12:43.789463 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rwqps"] Jan 30 16:12:43 crc kubenswrapper[4740]: I0130 16:12:43.789834 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqclg\" (UniqueName: \"kubernetes.io/projected/6d98558b-10cb-4d22-ac8e-4db35ad5b364-kube-api-access-cqclg\") pod \"openstack-operator-index-rwqps\" (UID: \"6d98558b-10cb-4d22-ac8e-4db35ad5b364\") " pod="openstack-operators/openstack-operator-index-rwqps" Jan 30 16:12:43 crc kubenswrapper[4740]: I0130 16:12:43.891134 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqclg\" (UniqueName: \"kubernetes.io/projected/6d98558b-10cb-4d22-ac8e-4db35ad5b364-kube-api-access-cqclg\") pod \"openstack-operator-index-rwqps\" (UID: \"6d98558b-10cb-4d22-ac8e-4db35ad5b364\") " pod="openstack-operators/openstack-operator-index-rwqps" Jan 30 16:12:43 crc kubenswrapper[4740]: I0130 16:12:43.925291 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqclg\" (UniqueName: \"kubernetes.io/projected/6d98558b-10cb-4d22-ac8e-4db35ad5b364-kube-api-access-cqclg\") pod \"openstack-operator-index-rwqps\" (UID: \"6d98558b-10cb-4d22-ac8e-4db35ad5b364\") " pod="openstack-operators/openstack-operator-index-rwqps" Jan 30 16:12:44 crc kubenswrapper[4740]: I0130 16:12:44.113785 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rwqps" Jan 30 16:12:44 crc kubenswrapper[4740]: I0130 16:12:44.544220 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rwqps"] Jan 30 16:12:44 crc kubenswrapper[4740]: W0130 16:12:44.556143 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d98558b_10cb_4d22_ac8e_4db35ad5b364.slice/crio-ceed2a4754a5604fbdeb81a2371f77fc87fd066b061d239aa72bc80257830583 WatchSource:0}: Error finding container ceed2a4754a5604fbdeb81a2371f77fc87fd066b061d239aa72bc80257830583: Status 404 returned error can't find the container with id ceed2a4754a5604fbdeb81a2371f77fc87fd066b061d239aa72bc80257830583 Jan 30 16:12:44 crc kubenswrapper[4740]: I0130 16:12:44.723426 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rwqps" event={"ID":"6d98558b-10cb-4d22-ac8e-4db35ad5b364","Type":"ContainerStarted","Data":"ceed2a4754a5604fbdeb81a2371f77fc87fd066b061d239aa72bc80257830583"} Jan 30 16:12:45 crc kubenswrapper[4740]: I0130 16:12:45.692492 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-kj46g" Jan 30 16:12:45 crc kubenswrapper[4740]: I0130 16:12:45.826069 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-spzrl" Jan 30 16:12:50 crc kubenswrapper[4740]: I0130 16:12:50.770968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rwqps" event={"ID":"6d98558b-10cb-4d22-ac8e-4db35ad5b364","Type":"ContainerStarted","Data":"03b0043e3eb2deb6eb1419b7c401f1729cd19db37933dd656abad5cf6426cd71"} Jan 30 16:12:50 crc kubenswrapper[4740]: I0130 16:12:50.775237 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-85mmg" event={"ID":"f127bc67-fa14-42b3-8765-e14964a74fd5","Type":"ContainerStarted","Data":"d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5"} Jan 30 16:12:50 crc kubenswrapper[4740]: I0130 16:12:50.775544 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-85mmg" podUID="f127bc67-fa14-42b3-8765-e14964a74fd5" containerName="registry-server" containerID="cri-o://d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5" gracePeriod=2 Jan 30 16:12:50 crc kubenswrapper[4740]: I0130 16:12:50.797317 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rwqps" podStartSLOduration=2.362520238 podStartE2EDuration="7.797290752s" podCreationTimestamp="2026-01-30 16:12:43 +0000 UTC" firstStartedPulling="2026-01-30 16:12:44.560274398 +0000 UTC m=+1013.197336987" lastFinishedPulling="2026-01-30 16:12:49.995044912 +0000 UTC m=+1018.632107501" observedRunningTime="2026-01-30 16:12:50.791671472 +0000 UTC m=+1019.428734071" watchObservedRunningTime="2026-01-30 16:12:50.797290752 +0000 UTC m=+1019.434353351" Jan 30 16:12:50 crc kubenswrapper[4740]: I0130 16:12:50.814548 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-85mmg" podStartSLOduration=1.8095773670000002 podStartE2EDuration="10.814525621s" podCreationTimestamp="2026-01-30 16:12:40 +0000 UTC" firstStartedPulling="2026-01-30 16:12:40.999985084 +0000 UTC m=+1009.637047683" lastFinishedPulling="2026-01-30 16:12:50.004933338 +0000 UTC m=+1018.641995937" observedRunningTime="2026-01-30 16:12:50.813808953 +0000 UTC m=+1019.450871572" watchObservedRunningTime="2026-01-30 16:12:50.814525621 +0000 UTC m=+1019.451588220" Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.229413 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-85mmg" Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.422586 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9cln\" (UniqueName: \"kubernetes.io/projected/f127bc67-fa14-42b3-8765-e14964a74fd5-kube-api-access-c9cln\") pod \"f127bc67-fa14-42b3-8765-e14964a74fd5\" (UID: \"f127bc67-fa14-42b3-8765-e14964a74fd5\") " Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.433453 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f127bc67-fa14-42b3-8765-e14964a74fd5-kube-api-access-c9cln" (OuterVolumeSpecName: "kube-api-access-c9cln") pod "f127bc67-fa14-42b3-8765-e14964a74fd5" (UID: "f127bc67-fa14-42b3-8765-e14964a74fd5"). InnerVolumeSpecName "kube-api-access-c9cln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.525181 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9cln\" (UniqueName: \"kubernetes.io/projected/f127bc67-fa14-42b3-8765-e14964a74fd5-kube-api-access-c9cln\") on node \"crc\" DevicePath \"\"" Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.783765 4740 generic.go:334] "Generic (PLEG): container finished" podID="f127bc67-fa14-42b3-8765-e14964a74fd5" containerID="d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5" exitCode=0 Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.783827 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-85mmg" Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.783871 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-85mmg" event={"ID":"f127bc67-fa14-42b3-8765-e14964a74fd5","Type":"ContainerDied","Data":"d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5"} Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.783941 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-85mmg" event={"ID":"f127bc67-fa14-42b3-8765-e14964a74fd5","Type":"ContainerDied","Data":"7a5d609b0beba5f52fafaeec7130fb69c0f9f9a9bd996c1e220b1adcee5df7a9"} Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.783976 4740 scope.go:117] "RemoveContainer" containerID="d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5" Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.814709 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-85mmg"] Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.822098 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-85mmg"] Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.822523 4740 scope.go:117] "RemoveContainer" containerID="d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5" Jan 30 16:12:51 crc kubenswrapper[4740]: E0130 16:12:51.823162 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5\": container with ID starting with d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5 not found: ID does not exist" containerID="d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5" Jan 30 16:12:51 crc kubenswrapper[4740]: I0130 16:12:51.823213 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5"} err="failed to get container status \"d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5\": rpc error: code = NotFound desc = could not find container \"d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5\": container with ID starting with d6d7d7e1b92b7aea8e34f9de602cbbe672f8dc9d9eafbcf78a3f3fc518bb29a5 not found: ID does not exist" Jan 30 16:12:53 crc kubenswrapper[4740]: I0130 16:12:53.344682 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f127bc67-fa14-42b3-8765-e14964a74fd5" path="/var/lib/kubelet/pods/f127bc67-fa14-42b3-8765-e14964a74fd5/volumes" Jan 30 16:12:54 crc kubenswrapper[4740]: I0130 16:12:54.114163 4740 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rwqps" Jan 30 16:12:54 crc kubenswrapper[4740]: I0130 16:12:54.114698 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rwqps" Jan 30 16:12:54 crc kubenswrapper[4740]: I0130 16:12:54.152229 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-rwqps" Jan 30 16:12:54 crc kubenswrapper[4740]: I0130 16:12:54.455688 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:12:54 crc kubenswrapper[4740]: I0130 16:12:54.455789 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:12:54 crc kubenswrapper[4740]: I0130 16:12:54.455854 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 16:12:54 crc kubenswrapper[4740]: I0130 16:12:54.456654 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54a3dc50e2178ac6be5a1090a31fc5146169210c340898f9c81cac9ad152568a"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:12:54 crc kubenswrapper[4740]: I0130 16:12:54.456732 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://54a3dc50e2178ac6be5a1090a31fc5146169210c340898f9c81cac9ad152568a" gracePeriod=600 Jan 30 16:12:54 crc kubenswrapper[4740]: I0130 16:12:54.812788 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="54a3dc50e2178ac6be5a1090a31fc5146169210c340898f9c81cac9ad152568a" exitCode=0 Jan 30 16:12:54 crc kubenswrapper[4740]: I0130 16:12:54.812874 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"54a3dc50e2178ac6be5a1090a31fc5146169210c340898f9c81cac9ad152568a"} Jan 30 16:12:54 crc kubenswrapper[4740]: I0130 16:12:54.813331 4740 scope.go:117] "RemoveContainer" containerID="d64453654b97af2a24f5bc387099a48fbcbd73b8814c94ebe9bbc445d2531865" Jan 30 16:12:55 crc kubenswrapper[4740]: I0130 16:12:55.823978 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"7444545e175a90767b9873079c8fd1472b5f709bb77111922611dbabedd78e11"} Jan 30 16:12:56 crc kubenswrapper[4740]: I0130 16:12:56.309402 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-bmfdq" 
Jan 30 16:13:04 crc kubenswrapper[4740]: I0130 16:13:04.145112 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-rwqps" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.381784 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9"] Jan 30 16:13:10 crc kubenswrapper[4740]: E0130 16:13:10.382975 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f127bc67-fa14-42b3-8765-e14964a74fd5" containerName="registry-server" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.382998 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f127bc67-fa14-42b3-8765-e14964a74fd5" containerName="registry-server" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.383198 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f127bc67-fa14-42b3-8765-e14964a74fd5" containerName="registry-server" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.384851 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.387034 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2tn84" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.387600 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9"] Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.430828 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-util\") pod \"1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") " pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.431319 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5h7s\" (UniqueName: \"kubernetes.io/projected/e62c1d15-4087-4ac5-85e0-7982f249c1a3-kube-api-access-h5h7s\") pod \"1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") " pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.434617 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-bundle\") pod \"1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") " pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.536646 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5h7s\" (UniqueName: \"kubernetes.io/projected/e62c1d15-4087-4ac5-85e0-7982f249c1a3-kube-api-access-h5h7s\") pod \"1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") " pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" Jan 30 
16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.536770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-bundle\") pod \"1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") " pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.536863 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-util\") pod \"1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") " pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.538035 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-util\") pod \"1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") " pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.538226 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-bundle\") pod \"1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") " pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.564135 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5h7s\" (UniqueName: \"kubernetes.io/projected/e62c1d15-4087-4ac5-85e0-7982f249c1a3-kube-api-access-h5h7s\") pod \"1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") " pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.741898 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9"
Jan 30 16:13:10 crc kubenswrapper[4740]: I0130 16:13:10.971757    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9"]
Jan 30 16:13:10 crc kubenswrapper[4740]: W0130 16:13:10.979214    4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode62c1d15_4087_4ac5_85e0_7982f249c1a3.slice/crio-e45105f2c017bff3f2759b374bd367b52393c3e01c2e5393dc43eeb7a7a564e6 WatchSource:0}: Error finding container e45105f2c017bff3f2759b374bd367b52393c3e01c2e5393dc43eeb7a7a564e6: Status 404 returned error can't find the container with id e45105f2c017bff3f2759b374bd367b52393c3e01c2e5393dc43eeb7a7a564e6
Jan 30 16:13:11 crc kubenswrapper[4740]: I0130 16:13:11.951251    4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" event={"ID":"e62c1d15-4087-4ac5-85e0-7982f249c1a3","Type":"ContainerStarted","Data":"8bb14fbc87cb371540be94a0e576f42f37ea85902ae994c998c6024c6f364b81"}
Jan 30 16:13:11 crc kubenswrapper[4740]: I0130 16:13:11.951705    4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" event={"ID":"e62c1d15-4087-4ac5-85e0-7982f249c1a3","Type":"ContainerStarted","Data":"e45105f2c017bff3f2759b374bd367b52393c3e01c2e5393dc43eeb7a7a564e6"}
Jan 30 16:13:12 crc kubenswrapper[4740]: I0130 16:13:12.958957    4740 generic.go:334] "Generic (PLEG): container finished" podID="e62c1d15-4087-4ac5-85e0-7982f249c1a3" containerID="8bb14fbc87cb371540be94a0e576f42f37ea85902ae994c998c6024c6f364b81" exitCode=0
Jan 30 16:13:12 crc kubenswrapper[4740]: I0130 16:13:12.959359    4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" event={"ID":"e62c1d15-4087-4ac5-85e0-7982f249c1a3","Type":"ContainerDied","Data":"8bb14fbc87cb371540be94a0e576f42f37ea85902ae994c998c6024c6f364b81"}
Jan 30 16:13:12 crc kubenswrapper[4740]: I0130 16:13:12.961481    4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 16:13:13 crc kubenswrapper[4740]: I0130 16:13:13.968062    4740 generic.go:334] "Generic (PLEG): container finished" podID="e62c1d15-4087-4ac5-85e0-7982f249c1a3" containerID="e8ee322832069bd0dd8a693a16b5b65912f1edb4f47a9c357b56b8e7bda20caf" exitCode=0
Jan 30 16:13:13 crc kubenswrapper[4740]: I0130 16:13:13.968169    4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" event={"ID":"e62c1d15-4087-4ac5-85e0-7982f249c1a3","Type":"ContainerDied","Data":"e8ee322832069bd0dd8a693a16b5b65912f1edb4f47a9c357b56b8e7bda20caf"}
Jan 30 16:13:14 crc kubenswrapper[4740]: I0130 16:13:14.984806    4740 generic.go:334] "Generic (PLEG): container finished" podID="e62c1d15-4087-4ac5-85e0-7982f249c1a3" containerID="be4ce8645c3a4509ec39a758d29a5188d91280c69121da9ba69d077b88460dac" exitCode=0
Jan 30 16:13:14 crc kubenswrapper[4740]: I0130 16:13:14.984864    4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" event={"ID":"e62c1d15-4087-4ac5-85e0-7982f249c1a3","Type":"ContainerDied","Data":"be4ce8645c3a4509ec39a758d29a5188d91280c69121da9ba69d077b88460dac"}
Jan 30 16:13:16 crc kubenswrapper[4740]: I0130 16:13:16.253842    4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9"
Jan 30 16:13:16 crc kubenswrapper[4740]: I0130 16:13:16.427002    4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-util\") pod \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") "
Jan 30 16:13:16 crc kubenswrapper[4740]: I0130 16:13:16.427618    4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-bundle\") pod \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") "
Jan 30 16:13:16 crc kubenswrapper[4740]: I0130 16:13:16.427688    4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5h7s\" (UniqueName: \"kubernetes.io/projected/e62c1d15-4087-4ac5-85e0-7982f249c1a3-kube-api-access-h5h7s\") pod \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\" (UID: \"e62c1d15-4087-4ac5-85e0-7982f249c1a3\") "
Jan 30 16:13:16 crc kubenswrapper[4740]: I0130 16:13:16.429243    4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-bundle" (OuterVolumeSpecName: "bundle") pod "e62c1d15-4087-4ac5-85e0-7982f249c1a3" (UID: "e62c1d15-4087-4ac5-85e0-7982f249c1a3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:13:16 crc kubenswrapper[4740]: I0130 16:13:16.436760    4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62c1d15-4087-4ac5-85e0-7982f249c1a3-kube-api-access-h5h7s" (OuterVolumeSpecName: "kube-api-access-h5h7s") pod "e62c1d15-4087-4ac5-85e0-7982f249c1a3" (UID: "e62c1d15-4087-4ac5-85e0-7982f249c1a3"). InnerVolumeSpecName "kube-api-access-h5h7s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:13:16 crc kubenswrapper[4740]: I0130 16:13:16.441489    4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-util" (OuterVolumeSpecName: "util") pod "e62c1d15-4087-4ac5-85e0-7982f249c1a3" (UID: "e62c1d15-4087-4ac5-85e0-7982f249c1a3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:13:16 crc kubenswrapper[4740]: I0130 16:13:16.529032    4740 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:13:16 crc kubenswrapper[4740]: I0130 16:13:16.529068    4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5h7s\" (UniqueName: \"kubernetes.io/projected/e62c1d15-4087-4ac5-85e0-7982f249c1a3-kube-api-access-h5h7s\") on node \"crc\" DevicePath \"\""
Jan 30 16:13:16 crc kubenswrapper[4740]: I0130 16:13:16.529079    4740 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e62c1d15-4087-4ac5-85e0-7982f249c1a3-util\") on node \"crc\" DevicePath \"\""
Jan 30 16:13:17 crc kubenswrapper[4740]: I0130 16:13:17.005697    4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9" event={"ID":"e62c1d15-4087-4ac5-85e0-7982f249c1a3","Type":"ContainerDied","Data":"e45105f2c017bff3f2759b374bd367b52393c3e01c2e5393dc43eeb7a7a564e6"}
Jan 30 16:13:17 crc kubenswrapper[4740]: I0130 16:13:17.005766    4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45105f2c017bff3f2759b374bd367b52393c3e01c2e5393dc43eeb7a7a564e6"
Jan 30 16:13:17 crc kubenswrapper[4740]: I0130 16:13:17.005794    4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9"
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.562725    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v"]
Jan 30 16:13:22 crc kubenswrapper[4740]: E0130 16:13:22.563517    4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62c1d15-4087-4ac5-85e0-7982f249c1a3" containerName="pull"
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.563535    4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62c1d15-4087-4ac5-85e0-7982f249c1a3" containerName="pull"
Jan 30 16:13:22 crc kubenswrapper[4740]: E0130 16:13:22.563549    4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62c1d15-4087-4ac5-85e0-7982f249c1a3" containerName="extract"
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.563556    4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62c1d15-4087-4ac5-85e0-7982f249c1a3" containerName="extract"
Jan 30 16:13:22 crc kubenswrapper[4740]: E0130 16:13:22.563576    4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62c1d15-4087-4ac5-85e0-7982f249c1a3" containerName="util"
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.563585    4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62c1d15-4087-4ac5-85e0-7982f249c1a3" containerName="util"
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.563739    4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62c1d15-4087-4ac5-85e0-7982f249c1a3" containerName="extract"
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.564409    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v"
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.573483    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-jgkx8"
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.594296    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v"]
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.642268    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsj7m\" (UniqueName: \"kubernetes.io/projected/72ae6a1c-defc-4fa0-8526-6fa59b0b2138-kube-api-access-nsj7m\") pod \"openstack-operator-controller-init-8d66f78b7-26k2v\" (UID: \"72ae6a1c-defc-4fa0-8526-6fa59b0b2138\") " pod="openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v"
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.743954    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsj7m\" (UniqueName: \"kubernetes.io/projected/72ae6a1c-defc-4fa0-8526-6fa59b0b2138-kube-api-access-nsj7m\") pod \"openstack-operator-controller-init-8d66f78b7-26k2v\" (UID: \"72ae6a1c-defc-4fa0-8526-6fa59b0b2138\") " pod="openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v"
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.766922    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsj7m\" (UniqueName: \"kubernetes.io/projected/72ae6a1c-defc-4fa0-8526-6fa59b0b2138-kube-api-access-nsj7m\") pod \"openstack-operator-controller-init-8d66f78b7-26k2v\" (UID: \"72ae6a1c-defc-4fa0-8526-6fa59b0b2138\") " pod="openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v"
Jan 30 16:13:22 crc kubenswrapper[4740]: I0130 16:13:22.884250    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v"
Jan 30 16:13:23 crc kubenswrapper[4740]: I0130 16:13:23.184532    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v"]
Jan 30 16:13:24 crc kubenswrapper[4740]: I0130 16:13:24.077531    4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v" event={"ID":"72ae6a1c-defc-4fa0-8526-6fa59b0b2138","Type":"ContainerStarted","Data":"f65300715f88ca5e5ece90922ece81ce8a2ea05cc8c319f7337b42e1635b31a7"}
Jan 30 16:13:29 crc kubenswrapper[4740]: I0130 16:13:29.118438    4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v" event={"ID":"72ae6a1c-defc-4fa0-8526-6fa59b0b2138","Type":"ContainerStarted","Data":"884a3b06f7be4574b6216307ec33e7ae699b78e4860170e3f9ad32ec393c7de1"}
Jan 30 16:13:29 crc kubenswrapper[4740]: I0130 16:13:29.119177    4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v"
Jan 30 16:13:29 crc kubenswrapper[4740]: I0130 16:13:29.170891    4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v" podStartSLOduration=2.289976674 podStartE2EDuration="7.170868501s" podCreationTimestamp="2026-01-30 16:13:22 +0000 UTC" firstStartedPulling="2026-01-30 16:13:23.198222489 +0000 UTC m=+1051.835285088" lastFinishedPulling="2026-01-30 16:13:28.079114316 +0000 UTC m=+1056.716176915" observedRunningTime="2026-01-30 16:13:29.163378875 +0000 UTC m=+1057.800441494" watchObservedRunningTime="2026-01-30 16:13:29.170868501 +0000 UTC m=+1057.807931100"
Jan 30 16:13:42 crc kubenswrapper[4740]: I0130 16:13:42.888676    4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-8d66f78b7-26k2v"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.682388    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.684052    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.687111    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-6j8cr"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.691755    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.693152    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm"
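The pod_startup_latency_tracker entry above encodes a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it. A small Go check using the timestamps copied from the log; the parsing helper is ours, not kubelet code.

    // startup_latency_check.go: reproduces the arithmetic behind the
    // "Observed pod startup duration" line above.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-30 16:13:22 +0000 UTC")
        firstPull := mustParse("2026-01-30 16:13:23.198222489 +0000 UTC")
        lastPull := mustParse("2026-01-30 16:13:28.079114316 +0000 UTC")
        watchRunning := mustParse("2026-01-30 16:13:29.170868501 +0000 UTC")

        e2e := watchRunning.Sub(created)     // 7.170868501s, matches podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 2.289976674s, matches podStartSLOduration
        fmt.Println("podStartE2EDuration:", e2e)
        fmt.Println("podStartSLOduration:", slo)
    }

In other words, roughly 4.88s of the 7.17s startup was spent pulling images; the SLO-relevant portion is the remaining ~2.29s.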
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.694867    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-fm98m"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.720819    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.743333    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn4pp\" (UniqueName: \"kubernetes.io/projected/4ffa4d95-fc8d-4352-9bb3-b74038d53453-kube-api-access-tn4pp\") pod \"cinder-operator-controller-manager-8d874c8fc-xsqtm\" (UID: \"4ffa4d95-fc8d-4352-9bb3-b74038d53453\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.743424    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2tqt\" (UniqueName: \"kubernetes.io/projected/b3f3f690-263c-406b-9651-b1d548a73010-kube-api-access-r2tqt\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-tzdc2\" (UID: \"b3f3f690-263c-406b-9651-b1d548a73010\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.748270    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.749712    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.754372    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-t8vjk"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.766239    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.774887    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.777174    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2d6qd"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.785085    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.805081    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.814704    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.844112    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.845204    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.847586    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn4pp\" (UniqueName: \"kubernetes.io/projected/4ffa4d95-fc8d-4352-9bb3-b74038d53453-kube-api-access-tn4pp\") pod \"cinder-operator-controller-manager-8d874c8fc-xsqtm\" (UID: \"4ffa4d95-fc8d-4352-9bb3-b74038d53453\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.847618    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh9bv\" (UniqueName: \"kubernetes.io/projected/de27448d-0b23-4bbb-81b2-7818361e53bf-kube-api-access-jh9bv\") pod \"glance-operator-controller-manager-8886f4c47-2cj65\" (UID: \"de27448d-0b23-4bbb-81b2-7818361e53bf\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.847662    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzbq8\" (UniqueName: \"kubernetes.io/projected/7a1d5aff-da4c-4c0e-9616-44da3511eef2-kube-api-access-qzbq8\") pod \"designate-operator-controller-manager-6d9697b7f4-q652d\" (UID: \"7a1d5aff-da4c-4c0e-9616-44da3511eef2\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.847690    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2tqt\" (UniqueName: \"kubernetes.io/projected/b3f3f690-263c-406b-9651-b1d548a73010-kube-api-access-r2tqt\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-tzdc2\" (UID: \"b3f3f690-263c-406b-9651-b1d548a73010\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.847732    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgzsh\" (UniqueName: \"kubernetes.io/projected/9fa5493f-2e76-4fda-9a43-4d8e7828f2a7-kube-api-access-cgzsh\") pod \"heat-operator-controller-manager-69d6db494d-jjtfm\" (UID: \"9fa5493f-2e76-4fda-9a43-4d8e7828f2a7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.852648    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-5svmg"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.853265    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.861267    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.862692    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.866057    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jzgjd"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.900740    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn4pp\" (UniqueName: \"kubernetes.io/projected/4ffa4d95-fc8d-4352-9bb3-b74038d53453-kube-api-access-tn4pp\") pod \"cinder-operator-controller-manager-8d874c8fc-xsqtm\" (UID: \"4ffa4d95-fc8d-4352-9bb3-b74038d53453\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.909568    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.926194    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2tqt\" (UniqueName: \"kubernetes.io/projected/b3f3f690-263c-406b-9651-b1d548a73010-kube-api-access-r2tqt\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-tzdc2\" (UID: \"b3f3f690-263c-406b-9651-b1d548a73010\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.931731    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"]
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.947617    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.954884    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.955174    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qthnb"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.974249    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh9bv\" (UniqueName: \"kubernetes.io/projected/de27448d-0b23-4bbb-81b2-7818361e53bf-kube-api-access-jh9bv\") pod \"glance-operator-controller-manager-8886f4c47-2cj65\" (UID: \"de27448d-0b23-4bbb-81b2-7818361e53bf\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.975594    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzbq8\" (UniqueName: \"kubernetes.io/projected/7a1d5aff-da4c-4c0e-9616-44da3511eef2-kube-api-access-qzbq8\") pod \"designate-operator-controller-manager-6d9697b7f4-q652d\" (UID: \"7a1d5aff-da4c-4c0e-9616-44da3511eef2\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.975828    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgzsh\" (UniqueName: \"kubernetes.io/projected/9fa5493f-2e76-4fda-9a43-4d8e7828f2a7-kube-api-access-cgzsh\") pod \"heat-operator-controller-manager-69d6db494d-jjtfm\" (UID: \"9fa5493f-2e76-4fda-9a43-4d8e7828f2a7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm"
Jan 30 16:14:02 crc kubenswrapper[4740]: I0130 16:14:02.980907    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:02.999538    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.014734    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5qv2s"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.034234    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzbq8\" (UniqueName: \"kubernetes.io/projected/7a1d5aff-da4c-4c0e-9616-44da3511eef2-kube-api-access-qzbq8\") pod \"designate-operator-controller-manager-6d9697b7f4-q652d\" (UID: \"7a1d5aff-da4c-4c0e-9616-44da3511eef2\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.034499    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgzsh\" (UniqueName: \"kubernetes.io/projected/9fa5493f-2e76-4fda-9a43-4d8e7828f2a7-kube-api-access-cgzsh\") pod \"heat-operator-controller-manager-69d6db494d-jjtfm\" (UID: \"9fa5493f-2e76-4fda-9a43-4d8e7828f2a7\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.043842    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2"
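Each kube-api-access-* volume above walks the same three steps, interleaved across many pods per reconciler pass: operationExecutor.VerifyControllerAttachedVolume started, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. A toy Go sketch of that desired-state/actual-state progression follows; it is illustrative only, not kubelet's volume manager, which also has the failure/retry path seen later for the "cert" volumes.

    // volume_phases_sketch.go: toy model of the per-volume phase progression.
    package main

    import "fmt"

    type phase int

    const (
        pending  phase = iota // volume known in desired state, nothing done yet
        verified              // attach verified against the controller
        mounted               // SetUp completed on the node
    )

    var msg = map[phase]string{
        verified: "operationExecutor.VerifyControllerAttachedVolume started",
        mounted:  "MountVolume.SetUp succeeded",
    }

    // reconcile advances every volume that has not reached the desired phase
    // by one step per pass; it reports whether any progress was made.
    func reconcile(actual map[string]phase, desired phase) bool {
        progressed := false
        for name, p := range actual {
            if p < desired {
                actual[name] = p + 1
                progressed = true
                fmt.Printf("%s for volume %q\n", msg[actual[name]], name)
            }
        }
        return progressed
    }

    func main() {
        vols := map[string]phase{
            "kube-api-access-tn4pp": pending,
            "kube-api-access-r2tqt": pending,
        }
        for reconcile(vols, mounted) {
            // keep passing until actual state matches desired state
        }
    }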
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.051132    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.052942    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.060334    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.060712    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-xx977"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.062003    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh9bv\" (UniqueName: \"kubernetes.io/projected/de27448d-0b23-4bbb-81b2-7818361e53bf-kube-api-access-jh9bv\") pod \"glance-operator-controller-manager-8886f4c47-2cj65\" (UID: \"de27448d-0b23-4bbb-81b2-7818361e53bf\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.070862    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.072153    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.076786    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vwhn9"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.081001    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.081190    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkm5n\" (UniqueName: \"kubernetes.io/projected/97e430a6-ad51-4e80-999e-75e568b1d6b6-kube-api-access-pkm5n\") pod \"horizon-operator-controller-manager-5fb775575f-g8sm9\" (UID: \"97e430a6-ad51-4e80-999e-75e568b1d6b6\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.081304    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz5hl\" (UniqueName: \"kubernetes.io/projected/736c30f6-a1e4-47aa-a6d0-713baf99ad69-kube-api-access-gz5hl\") pod \"infra-operator-controller-manager-79955696d6-6wz9h\" (UID: \"736c30f6-a1e4-47aa-a6d0-713baf99ad69\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.083179    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert\") pod \"infra-operator-controller-manager-79955696d6-6wz9h\" (UID: \"736c30f6-a1e4-47aa-a6d0-713baf99ad69\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.084290    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.090635    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.100747    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.101292    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.110506    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.111958    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.117696    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dqdrf"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.140146    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.154674    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-v8885"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.156823    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-v8885"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.166959    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-hfzm8"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.171619    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.172048    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.180762    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-v8885"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.186510    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.187628    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.194688    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-hj6rc"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.200153    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7xhp\" (UniqueName: \"kubernetes.io/projected/b9648635-827e-4a21-8890-ba8b1772d7c4-kube-api-access-k7xhp\") pod \"manila-operator-controller-manager-7dd968899f-w7jt2\" (UID: \"b9648635-827e-4a21-8890-ba8b1772d7c4\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.200299    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwffk\" (UniqueName: \"kubernetes.io/projected/ac86533b-0c5a-4704-b497-6e7e1114d938-kube-api-access-gwffk\") pod \"ironic-operator-controller-manager-5f4b8bd54d-nwnsv\" (UID: \"ac86533b-0c5a-4704-b497-6e7e1114d938\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.200416    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkm5n\" (UniqueName: \"kubernetes.io/projected/97e430a6-ad51-4e80-999e-75e568b1d6b6-kube-api-access-pkm5n\") pod \"horizon-operator-controller-manager-5fb775575f-g8sm9\" (UID: \"97e430a6-ad51-4e80-999e-75e568b1d6b6\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.200485    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz5hl\" (UniqueName: \"kubernetes.io/projected/736c30f6-a1e4-47aa-a6d0-713baf99ad69-kube-api-access-gz5hl\") pod \"infra-operator-controller-manager-79955696d6-6wz9h\" (UID: \"736c30f6-a1e4-47aa-a6d0-713baf99ad69\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.200528    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert\") pod \"infra-operator-controller-manager-79955696d6-6wz9h\" (UID: \"736c30f6-a1e4-47aa-a6d0-713baf99ad69\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.200630    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w5fz\" (UniqueName: \"kubernetes.io/projected/88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c-kube-api-access-5w5fz\") pod \"keystone-operator-controller-manager-84f48565d4-d4hf5\" (UID: \"88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5"
Jan 30 16:14:03 crc kubenswrapper[4740]: E0130 16:14:03.200845    4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 16:14:03 crc kubenswrapper[4740]: E0130 16:14:03.200915    4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert podName:736c30f6-a1e4-47aa-a6d0-713baf99ad69 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:03.700886513 +0000 UTC m=+1092.337949112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert") pod "infra-operator-controller-manager-79955696d6-6wz9h" (UID: "736c30f6-a1e4-47aa-a6d0-713baf99ad69") : secret "infra-operator-webhook-server-cert" not found
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.239889    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.249398    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkm5n\" (UniqueName: \"kubernetes.io/projected/97e430a6-ad51-4e80-999e-75e568b1d6b6-kube-api-access-pkm5n\") pod \"horizon-operator-controller-manager-5fb775575f-g8sm9\" (UID: \"97e430a6-ad51-4e80-999e-75e568b1d6b6\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.286385    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.287779    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.293797    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jb764"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.300816    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.304596    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7xhp\" (UniqueName: \"kubernetes.io/projected/b9648635-827e-4a21-8890-ba8b1772d7c4-kube-api-access-k7xhp\") pod \"manila-operator-controller-manager-7dd968899f-w7jt2\" (UID: \"b9648635-827e-4a21-8890-ba8b1772d7c4\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.304686    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwffk\" (UniqueName: \"kubernetes.io/projected/ac86533b-0c5a-4704-b497-6e7e1114d938-kube-api-access-gwffk\") pod \"ironic-operator-controller-manager-5f4b8bd54d-nwnsv\" (UID: \"ac86533b-0c5a-4704-b497-6e7e1114d938\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.304733    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c8zb\" (UniqueName: \"kubernetes.io/projected/c35e116f-97e5-47ec-aa40-955321cb09d5-kube-api-access-7c8zb\") pod \"mariadb-operator-controller-manager-67bf948998-lqf5n\" (UID: \"c35e116f-97e5-47ec-aa40-955321cb09d5\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.305169    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67t5n\" (UniqueName: \"kubernetes.io/projected/b82bfd4e-e72e-4941-b8aa-1baae2433217-kube-api-access-67t5n\") pod \"neutron-operator-controller-manager-585dbc889-v8885\" (UID: \"b82bfd4e-e72e-4941-b8aa-1baae2433217\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-v8885"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.305207    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w5fz\" (UniqueName: \"kubernetes.io/projected/88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c-kube-api-access-5w5fz\") pod \"keystone-operator-controller-manager-84f48565d4-d4hf5\" (UID: \"88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.305257    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8dw2\" (UniqueName: \"kubernetes.io/projected/6ba6b433-534d-4a14-9fbb-4418b1c39fd9-kube-api-access-m8dw2\") pod \"nova-operator-controller-manager-55bff696bd-bzjc5\" (UID: \"6ba6b433-534d-4a14-9fbb-4418b1c39fd9\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.320667    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.322659    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.330410    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz5hl\" (UniqueName: \"kubernetes.io/projected/736c30f6-a1e4-47aa-a6d0-713baf99ad69-kube-api-access-gz5hl\") pod \"infra-operator-controller-manager-79955696d6-6wz9h\" (UID: \"736c30f6-a1e4-47aa-a6d0-713baf99ad69\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.338533    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwffk\" (UniqueName: \"kubernetes.io/projected/ac86533b-0c5a-4704-b497-6e7e1114d938-kube-api-access-gwffk\") pod \"ironic-operator-controller-manager-5f4b8bd54d-nwnsv\" (UID: \"ac86533b-0c5a-4704-b497-6e7e1114d938\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.348145    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.348393    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-snw7w"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.370254    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w5fz\" (UniqueName: \"kubernetes.io/projected/88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c-kube-api-access-5w5fz\") pod \"keystone-operator-controller-manager-84f48565d4-d4hf5\" (UID: \"88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.384065    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7xhp\" (UniqueName: \"kubernetes.io/projected/b9648635-827e-4a21-8890-ba8b1772d7c4-kube-api-access-k7xhp\") pod \"manila-operator-controller-manager-7dd968899f-w7jt2\" (UID: \"b9648635-827e-4a21-8890-ba8b1772d7c4\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.389733    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.390586    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.391188    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.391880    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.391899    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.391983    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.392494    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.392715    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.395241    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.401040    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-96rd6"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.401272    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-ng2dd"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.404104    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lks7n"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.409951    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkq7z\" (UniqueName: \"kubernetes.io/projected/011e9da6-1efe-4002-91f3-0aa0923fa015-kube-api-access-rkq7z\") pod \"swift-operator-controller-manager-68fc8c869-92p8l\" (UID: \"011e9da6-1efe-4002-91f3-0aa0923fa015\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.410124    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgkwr\" (UniqueName: \"kubernetes.io/projected/b8a01322-677f-443a-83fd-6352c7523727-kube-api-access-jgkwr\") pod \"placement-operator-controller-manager-5b964cf4cd-hszqm\" (UID: \"b8a01322-677f-443a-83fd-6352c7523727\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.410237    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67t5n\" (UniqueName: \"kubernetes.io/projected/b82bfd4e-e72e-4941-b8aa-1baae2433217-kube-api-access-67t5n\") pod \"neutron-operator-controller-manager-585dbc889-v8885\" (UID: \"b82bfd4e-e72e-4941-b8aa-1baae2433217\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-v8885"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.410333    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg\" (UID: \"107dde7f-ab99-4981-ba7a-0c6756408b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.410429    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kchrs\" (UniqueName: \"kubernetes.io/projected/a7ff8a9d-40f9-4354-aa10-e7e93907a0a5-kube-api-access-kchrs\") pod \"octavia-operator-controller-manager-6687f8d877-tl627\" (UID: \"a7ff8a9d-40f9-4354-aa10-e7e93907a0a5\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.410516    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8dw2\" (UniqueName: \"kubernetes.io/projected/6ba6b433-534d-4a14-9fbb-4418b1c39fd9-kube-api-access-m8dw2\") pod \"nova-operator-controller-manager-55bff696bd-bzjc5\" (UID: \"6ba6b433-534d-4a14-9fbb-4418b1c39fd9\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.410605    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bljkm\" (UniqueName: \"kubernetes.io/projected/5a6574e0-d6db-4e3d-9203-c3b28694e68f-kube-api-access-bljkm\") pod \"ovn-operator-controller-manager-788c46999f-dpf55\" (UID: \"5a6574e0-d6db-4e3d-9203-c3b28694e68f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.410691    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrpd8\" (UniqueName: \"kubernetes.io/projected/107dde7f-ab99-4981-ba7a-0c6756408b54-kube-api-access-lrpd8\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg\" (UID: \"107dde7f-ab99-4981-ba7a-0c6756408b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.410880    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c8zb\" (UniqueName: \"kubernetes.io/projected/c35e116f-97e5-47ec-aa40-955321cb09d5-kube-api-access-7c8zb\") pod \"mariadb-operator-controller-manager-67bf948998-lqf5n\" (UID: \"c35e116f-97e5-47ec-aa40-955321cb09d5\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.432682    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.433841    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.435972    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.437115    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.453923    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.454141    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.454286    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7xfb7"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.464091    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.465198    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.477834    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.526819    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.527652    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67t5n\" (UniqueName: \"kubernetes.io/projected/b82bfd4e-e72e-4941-b8aa-1baae2433217-kube-api-access-67t5n\") pod \"neutron-operator-controller-manager-585dbc889-v8885\" (UID: \"b82bfd4e-e72e-4941-b8aa-1baae2433217\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-v8885"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.528375    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8dw2\" (UniqueName: \"kubernetes.io/projected/6ba6b433-534d-4a14-9fbb-4418b1c39fd9-kube-api-access-m8dw2\") pod \"nova-operator-controller-manager-55bff696bd-bzjc5\" (UID: \"6ba6b433-534d-4a14-9fbb-4418b1c39fd9\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.535012    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gpdzp"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.547165    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg\" (UID: \"107dde7f-ab99-4981-ba7a-0c6756408b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.547230    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kchrs\" (UniqueName: \"kubernetes.io/projected/a7ff8a9d-40f9-4354-aa10-e7e93907a0a5-kube-api-access-kchrs\") pod \"octavia-operator-controller-manager-6687f8d877-tl627\" (UID: \"a7ff8a9d-40f9-4354-aa10-e7e93907a0a5\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627"
Jan 30 16:14:03 crc kubenswrapper[4740]: E0130 16:14:03.550407    4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 16:14:03 crc kubenswrapper[4740]: E0130 16:14:03.550479    4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert podName:107dde7f-ab99-4981-ba7a-0c6756408b54 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:04.050459664 +0000 UTC m=+1092.687522263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" (UID: "107dde7f-ab99-4981-ba7a-0c6756408b54") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.590388    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c8zb\" (UniqueName: \"kubernetes.io/projected/c35e116f-97e5-47ec-aa40-955321cb09d5-kube-api-access-7c8zb\") pod \"mariadb-operator-controller-manager-67bf948998-lqf5n\" (UID: \"c35e116f-97e5-47ec-aa40-955321cb09d5\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.606227    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-v8885"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.607409    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bljkm\" (UniqueName: \"kubernetes.io/projected/5a6574e0-d6db-4e3d-9203-c3b28694e68f-kube-api-access-bljkm\") pod \"ovn-operator-controller-manager-788c46999f-dpf55\" (UID: \"5a6574e0-d6db-4e3d-9203-c3b28694e68f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.607545    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrpd8\" (UniqueName: \"kubernetes.io/projected/107dde7f-ab99-4981-ba7a-0c6756408b54-kube-api-access-lrpd8\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg\" (UID: \"107dde7f-ab99-4981-ba7a-0c6756408b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.607689    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkq7z\" (UniqueName: \"kubernetes.io/projected/011e9da6-1efe-4002-91f3-0aa0923fa015-kube-api-access-rkq7z\") pod \"swift-operator-controller-manager-68fc8c869-92p8l\" (UID: \"011e9da6-1efe-4002-91f3-0aa0923fa015\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.607858    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgkwr\" (UniqueName: \"kubernetes.io/projected/b8a01322-677f-443a-83fd-6352c7523727-kube-api-access-jgkwr\") pod \"placement-operator-controller-manager-5b964cf4cd-hszqm\" (UID: \"b8a01322-677f-443a-83fd-6352c7523727\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.669829    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.699872    4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-6sl24"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.702735    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.712145    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjkq4\" (UniqueName: \"kubernetes.io/projected/82688ddf-9d92-4ff1-873b-ca5766766189-kube-api-access-tjkq4\") pod \"test-operator-controller-manager-56f8bfcd9f-gtt5t\" (UID: \"82688ddf-9d92-4ff1-873b-ca5766766189\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.712239    4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert\") pod \"infra-operator-controller-manager-79955696d6-6wz9h\" (UID: \"736c30f6-a1e4-47aa-a6d0-713baf99ad69\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.712297    4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jcs8\" (UniqueName: \"kubernetes.io/projected/82b9c083-1154-46de-958e-6a7726aca988-kube-api-access-5jcs8\") pod \"telemetry-operator-controller-manager-df45f6d5f-lc4fv\" (UID: \"82b9c083-1154-46de-958e-6a7726aca988\") " pod="openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv"
Jan 30 16:14:03 crc kubenswrapper[4740]: E0130 16:14:03.712844    4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 16:14:03 crc kubenswrapper[4740]: E0130 16:14:03.712914    4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert podName:736c30f6-a1e4-47aa-a6d0-713baf99ad69 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:04.712892458 +0000 UTC m=+1093.349955057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert") pod "infra-operator-controller-manager-79955696d6-6wz9h" (UID: "736c30f6-a1e4-47aa-a6d0-713baf99ad69") : secret "infra-operator-webhook-server-cert" not found
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.735819    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kchrs\" (UniqueName: \"kubernetes.io/projected/a7ff8a9d-40f9-4354-aa10-e7e93907a0a5-kube-api-access-kchrs\") pod \"octavia-operator-controller-manager-6687f8d877-tl627\" (UID: \"a7ff8a9d-40f9-4354-aa10-e7e93907a0a5\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.736448    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgkwr\" (UniqueName: \"kubernetes.io/projected/b8a01322-677f-443a-83fd-6352c7523727-kube-api-access-jgkwr\") pod \"placement-operator-controller-manager-5b964cf4cd-hszqm\" (UID: \"b8a01322-677f-443a-83fd-6352c7523727\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.737138    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkq7z\" (UniqueName: \"kubernetes.io/projected/011e9da6-1efe-4002-91f3-0aa0923fa015-kube-api-access-rkq7z\") pod \"swift-operator-controller-manager-68fc8c869-92p8l\" (UID: \"011e9da6-1efe-4002-91f3-0aa0923fa015\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.739548    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrpd8\" (UniqueName: \"kubernetes.io/projected/107dde7f-ab99-4981-ba7a-0c6756408b54-kube-api-access-lrpd8\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg\" (UID: \"107dde7f-ab99-4981-ba7a-0c6756408b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.751006    4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t"]
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.753155    4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-4np7l"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.772203    4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bljkm\" (UniqueName: \"kubernetes.io/projected/5a6574e0-d6db-4e3d-9203-c3b28694e68f-kube-api-access-bljkm\") pod \"ovn-operator-controller-manager-788c46999f-dpf55\" (UID: \"5a6574e0-d6db-4e3d-9203-c3b28694e68f\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55"
Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.797821    4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n"
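The two failures above for the same infra-operator "cert" volume show the per-operation retry delay doubling: durationBeforeRetry is 500ms on the first secret "infra-operator-webhook-server-cert" not found error and 1s on the second, i.e. exponential backoff until the webhook cert secret appears. A minimal Go sketch of that doubling delay follows; the 500ms starting value matches the log, while the cap is an assumption added for illustration, not a value taken from this log.

    // backoff_sketch.go: doubling retry delay as seen in the
    // nestedpendingoperations lines above (500ms -> 1s -> ...).
    package main

    import (
        "fmt"
        "time"
    )

    type backoff struct {
        delay time.Duration // wait before the next retry
        max   time.Duration // assumed upper bound on the delay
    }

    // next returns the current wait and doubles it for the following
    // failure, clamping at max.
    func (b *backoff) next() time.Duration {
        d := b.delay
        b.delay *= 2
        if b.delay > b.max {
            b.delay = b.max
        }
        return d
    }

    func main() {
        b := &backoff{delay: 500 * time.Millisecond, max: 2 * time.Minute}
        for i := 1; i <= 4; i++ {
            fmt.Printf("retry %d after %v\n", i, b.next())
        }
        // Prints 500ms, 1s, 2s, 4s; the first two steps match the log.
    }

The errors themselves are a benign startup race: the operator pods are created before their webhook cert secrets exist, and the mounts succeed on a later retry once the secrets are populated.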
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.806218 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-6sl24"] Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.813462 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjkq4\" (UniqueName: \"kubernetes.io/projected/82688ddf-9d92-4ff1-873b-ca5766766189-kube-api-access-tjkq4\") pod \"test-operator-controller-manager-56f8bfcd9f-gtt5t\" (UID: \"82688ddf-9d92-4ff1-873b-ca5766766189\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.813554 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jcs8\" (UniqueName: \"kubernetes.io/projected/82b9c083-1154-46de-958e-6a7726aca988-kube-api-access-5jcs8\") pod \"telemetry-operator-controller-manager-df45f6d5f-lc4fv\" (UID: \"82b9c083-1154-46de-958e-6a7726aca988\") " pod="openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.813638 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6bdk\" (UniqueName: \"kubernetes.io/projected/d6ebfaaf-00f6-430e-bcb2-b5041395a101-kube-api-access-c6bdk\") pod \"watcher-operator-controller-manager-564965969-6sl24\" (UID: \"d6ebfaaf-00f6-430e-bcb2-b5041395a101\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.826080 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8"] Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.840487 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.841588 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jcs8\" (UniqueName: \"kubernetes.io/projected/82b9c083-1154-46de-958e-6a7726aca988-kube-api-access-5jcs8\") pod \"telemetry-operator-controller-manager-df45f6d5f-lc4fv\" (UID: \"82b9c083-1154-46de-958e-6a7726aca988\") " pod="openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.845047 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hhgz2" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.845283 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.845405 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.845700 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjkq4\" (UniqueName: \"kubernetes.io/projected/82688ddf-9d92-4ff1-873b-ca5766766189-kube-api-access-tjkq4\") pod \"test-operator-controller-manager-56f8bfcd9f-gtt5t\" (UID: \"82688ddf-9d92-4ff1-873b-ca5766766189\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.859037 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8"] Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.915511 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.915573 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.915659 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6bdk\" (UniqueName: \"kubernetes.io/projected/d6ebfaaf-00f6-430e-bcb2-b5041395a101-kube-api-access-c6bdk\") pod \"watcher-operator-controller-manager-564965969-6sl24\" (UID: \"d6ebfaaf-00f6-430e-bcb2-b5041395a101\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.915694 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tw2j\" (UniqueName: \"kubernetes.io/projected/4b1298c0-d749-42f3-97c1-ad1b19db8f96-kube-api-access-8tw2j\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") 
" pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.927544 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.935620 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb"] Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.936918 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.949780 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jzk57" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.950020 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb"] Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.966422 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6bdk\" (UniqueName: \"kubernetes.io/projected/d6ebfaaf-00f6-430e-bcb2-b5041395a101-kube-api-access-c6bdk\") pod \"watcher-operator-controller-manager-564965969-6sl24\" (UID: \"d6ebfaaf-00f6-430e-bcb2-b5041395a101\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24" Jan 30 16:14:03 crc kubenswrapper[4740]: I0130 16:14:03.994412 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.019230 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r5hf\" (UniqueName: \"kubernetes.io/projected/0040ed18-716a-4452-8209-c45c497d7fae-kube-api-access-6r5hf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bmhgb\" (UID: \"0040ed18-716a-4452-8209-c45c497d7fae\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.019386 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.019417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.019476 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tw2j\" (UniqueName: \"kubernetes.io/projected/4b1298c0-d749-42f3-97c1-ad1b19db8f96-kube-api-access-8tw2j\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " 
pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.020026 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.020082 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs podName:4b1298c0-d749-42f3-97c1-ad1b19db8f96 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:04.520062003 +0000 UTC m=+1093.157124602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs") pod "openstack-operator-controller-manager-6bdc979b86-rndp8" (UID: "4b1298c0-d749-42f3-97c1-ad1b19db8f96") : secret "metrics-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.020307 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.020340 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs podName:4b1298c0-d749-42f3-97c1-ad1b19db8f96 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:04.52033036 +0000 UTC m=+1093.157392959 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs") pod "openstack-operator-controller-manager-6bdc979b86-rndp8" (UID: "4b1298c0-d749-42f3-97c1-ad1b19db8f96") : secret "webhook-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.037005 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.054748 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.055313 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.072685 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tw2j\" (UniqueName: \"kubernetes.io/projected/4b1298c0-d749-42f3-97c1-ad1b19db8f96-kube-api-access-8tw2j\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.075445 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.100364 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.120870 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm"] Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.121407 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.121476 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert podName:107dde7f-ab99-4981-ba7a-0c6756408b54 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:05.121459157 +0000 UTC m=+1093.758521756 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" (UID: "107dde7f-ab99-4981-ba7a-0c6756408b54") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.121254 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg\" (UID: \"107dde7f-ab99-4981-ba7a-0c6756408b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.121524 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r5hf\" (UniqueName: \"kubernetes.io/projected/0040ed18-716a-4452-8209-c45c497d7fae-kube-api-access-6r5hf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bmhgb\" (UID: \"0040ed18-716a-4452-8209-c45c497d7fae\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.146596 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r5hf\" (UniqueName: \"kubernetes.io/projected/0040ed18-716a-4452-8209-c45c497d7fae-kube-api-access-6r5hf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bmhgb\" (UID: \"0040ed18-716a-4452-8209-c45c497d7fae\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" Jan 30 16:14:04 crc kubenswrapper[4740]: W0130 16:14:04.269003 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ffa4d95_fc8d_4352_9bb3_b74038d53453.slice/crio-01cd401c010e0a9d7e25ffa499477dd0ee30e14397805d096c15290c338b1b4a WatchSource:0}: Error finding container 01cd401c010e0a9d7e25ffa499477dd0ee30e14397805d096c15290c338b1b4a: Status 404 returned error can't find the container with id 01cd401c010e0a9d7e25ffa499477dd0ee30e14397805d096c15290c338b1b4a Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.302767 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2"] Jan 30 16:14:04 crc kubenswrapper[4740]: W0130 16:14:04.358688 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3f3f690_263c_406b_9651_b1d548a73010.slice/crio-8dfb6cd6a8e2b2feb334ac962189fd4cc4053d8b27555c80b24944cb7233e524 WatchSource:0}: Error finding container 8dfb6cd6a8e2b2feb334ac962189fd4cc4053d8b27555c80b24944cb7233e524: Status 404 returned error can't find the container with id 8dfb6cd6a8e2b2feb334ac962189fd4cc4053d8b27555c80b24944cb7233e524 Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.419126 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.471486 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d"] Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.475335 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm" event={"ID":"4ffa4d95-fc8d-4352-9bb3-b74038d53453","Type":"ContainerStarted","Data":"01cd401c010e0a9d7e25ffa499477dd0ee30e14397805d096c15290c338b1b4a"} Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.476317 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2" event={"ID":"b3f3f690-263c-406b-9651-b1d548a73010","Type":"ContainerStarted","Data":"8dfb6cd6a8e2b2feb334ac962189fd4cc4053d8b27555c80b24944cb7233e524"} Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.530877 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm"] Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.532795 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.533701 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.533048 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.533998 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs podName:4b1298c0-d749-42f3-97c1-ad1b19db8f96 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:05.533971416 +0000 UTC m=+1094.171034015 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs") pod "openstack-operator-controller-manager-6bdc979b86-rndp8" (UID: "4b1298c0-d749-42f3-97c1-ad1b19db8f96") : secret "metrics-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.533796 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.534664 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs podName:4b1298c0-d749-42f3-97c1-ad1b19db8f96 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:05.534639512 +0000 UTC m=+1094.171702201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs") pod "openstack-operator-controller-manager-6bdc979b86-rndp8" (UID: "4b1298c0-d749-42f3-97c1-ad1b19db8f96") : secret "webhook-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: W0130 16:14:04.544016 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a1d5aff_da4c_4c0e_9616_44da3511eef2.slice/crio-8bbef51e6229e43ebe1ff718353f3d0c46f12ea96166aae2b4486eb055d55e7d WatchSource:0}: Error finding container 8bbef51e6229e43ebe1ff718353f3d0c46f12ea96166aae2b4486eb055d55e7d: Status 404 returned error can't find the container with id 8bbef51e6229e43ebe1ff718353f3d0c46f12ea96166aae2b4486eb055d55e7d Jan 30 16:14:04 crc kubenswrapper[4740]: W0130 16:14:04.559206 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fa5493f_2e76_4fda_9a43_4d8e7828f2a7.slice/crio-b1b2556647275aa32ba99cc36be8cc1c1f9b67f17909e8ea4a915470474d1119 WatchSource:0}: Error finding container b1b2556647275aa32ba99cc36be8cc1c1f9b67f17909e8ea4a915470474d1119: Status 404 returned error can't find the container with id b1b2556647275aa32ba99cc36be8cc1c1f9b67f17909e8ea4a915470474d1119 Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.625606 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65"] Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.737718 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert\") pod \"infra-operator-controller-manager-79955696d6-6wz9h\" (UID: \"736c30f6-a1e4-47aa-a6d0-713baf99ad69\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h" Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.737995 4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: E0130 16:14:04.738057 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert podName:736c30f6-a1e4-47aa-a6d0-713baf99ad69 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:06.738035615 +0000 UTC m=+1095.375098214 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert") pod "infra-operator-controller-manager-79955696d6-6wz9h" (UID: "736c30f6-a1e4-47aa-a6d0-713baf99ad69") : secret "infra-operator-webhook-server-cert" not found Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.915857 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2"] Jan 30 16:14:04 crc kubenswrapper[4740]: I0130 16:14:04.942871 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9"] Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.107933 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv"] Jan 30 16:14:05 crc kubenswrapper[4740]: W0130 16:14:05.108787 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac86533b_0c5a_4704_b497_6e7e1114d938.slice/crio-6eb1981c1862025eda581fb8d59b63ea4c8a1471d6e593da4896412010648b02 WatchSource:0}: Error finding container 6eb1981c1862025eda581fb8d59b63ea4c8a1471d6e593da4896412010648b02: Status 404 returned error can't find the container with id 6eb1981c1862025eda581fb8d59b63ea4c8a1471d6e593da4896412010648b02 Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.143398 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg\" (UID: \"107dde7f-ab99-4981-ba7a-0c6756408b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.143832 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.143895 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert podName:107dde7f-ab99-4981-ba7a-0c6756408b54 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:07.143877478 +0000 UTC m=+1095.780940067 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" (UID: "107dde7f-ab99-4981-ba7a-0c6756408b54") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.197339 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5"] Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.208618 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627"] Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.492069 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d" event={"ID":"7a1d5aff-da4c-4c0e-9616-44da3511eef2","Type":"ContainerStarted","Data":"8bbef51e6229e43ebe1ff718353f3d0c46f12ea96166aae2b4486eb055d55e7d"} Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.493022 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2" event={"ID":"b9648635-827e-4a21-8890-ba8b1772d7c4","Type":"ContainerStarted","Data":"8f23353b720e39500f90351a710b4f1032796639a2dfbf90a7b7bc320b3adad0"} Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.499469 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627" event={"ID":"a7ff8a9d-40f9-4354-aa10-e7e93907a0a5","Type":"ContainerStarted","Data":"90f87808013f755be71e3ad022a6f0b5cf49c8244aad3d699d611673e4d49842"} Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.502182 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm" event={"ID":"9fa5493f-2e76-4fda-9a43-4d8e7828f2a7","Type":"ContainerStarted","Data":"b1b2556647275aa32ba99cc36be8cc1c1f9b67f17909e8ea4a915470474d1119"} Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.509475 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv" event={"ID":"ac86533b-0c5a-4704-b497-6e7e1114d938","Type":"ContainerStarted","Data":"6eb1981c1862025eda581fb8d59b63ea4c8a1471d6e593da4896412010648b02"} Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.522185 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65" event={"ID":"de27448d-0b23-4bbb-81b2-7818361e53bf","Type":"ContainerStarted","Data":"eea4b88709f3e8d6d9d947c33491fa466f50fb30c0366f74ad7113d37dfc76e4"} Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.522581 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l"] Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.524568 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9" event={"ID":"97e430a6-ad51-4e80-999e-75e568b1d6b6","Type":"ContainerStarted","Data":"1c5b3377b9462f5cc35047148ada08c1e4fa44b039ea7018c2c2be2a0f779887"} Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.530216 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5" 
event={"ID":"6ba6b433-534d-4a14-9fbb-4418b1c39fd9","Type":"ContainerStarted","Data":"b0859e51fdfe38a515a5b6f82a2688734171fd11ed8d02ae984775d3385ea0cc"} Jan 30 16:14:05 crc kubenswrapper[4740]: W0130 16:14:05.530821 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod011e9da6_1efe_4002_91f3_0aa0923fa015.slice/crio-b13e31bba3e1d01a8741a18334ca755cc27ab06c958c8bd41e1c558f4b176b82 WatchSource:0}: Error finding container b13e31bba3e1d01a8741a18334ca755cc27ab06c958c8bd41e1c558f4b176b82: Status 404 returned error can't find the container with id b13e31bba3e1d01a8741a18334ca755cc27ab06c958c8bd41e1c558f4b176b82 Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.532223 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-6sl24"] Jan 30 16:14:05 crc kubenswrapper[4740]: W0130 16:14:05.533312 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ebfaaf_00f6_430e_bcb2_b5041395a101.slice/crio-9c9f6c61fb03f9b1b7120d7376c49a4fb3fd3211041fc19d8fd0255111bb1b0f WatchSource:0}: Error finding container 9c9f6c61fb03f9b1b7120d7376c49a4fb3fd3211041fc19d8fd0255111bb1b0f: Status 404 returned error can't find the container with id 9c9f6c61fb03f9b1b7120d7376c49a4fb3fd3211041fc19d8fd0255111bb1b0f Jan 30 16:14:05 crc kubenswrapper[4740]: W0130 16:14:05.537641 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb82bfd4e_e72e_4941_b8aa_1baae2433217.slice/crio-a00a5936995be72ee807beb46e6cfe92735d23ec1344d0149b8054ce96d576ec WatchSource:0}: Error finding container a00a5936995be72ee807beb46e6cfe92735d23ec1344d0149b8054ce96d576ec: Status 404 returned error can't find the container with id a00a5936995be72ee807beb46e6cfe92735d23ec1344d0149b8054ce96d576ec Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.551537 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-v8885"] Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.582567 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.582628 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.584569 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.584689 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm"] Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.584795 4740 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.584965 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs podName:4b1298c0-d749-42f3-97c1-ad1b19db8f96 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:07.584937527 +0000 UTC m=+1096.222000126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs") pod "openstack-operator-controller-manager-6bdc979b86-rndp8" (UID: "4b1298c0-d749-42f3-97c1-ad1b19db8f96") : secret "webhook-server-cert" not found Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.585257 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs podName:4b1298c0-d749-42f3-97c1-ad1b19db8f96 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:07.585239104 +0000 UTC m=+1096.222301713 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs") pod "openstack-operator-controller-manager-6bdc979b86-rndp8" (UID: "4b1298c0-d749-42f3-97c1-ad1b19db8f96") : secret "metrics-server-cert" not found Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.592922 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n"] Jan 30 16:14:05 crc kubenswrapper[4740]: W0130 16:14:05.601886 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8a01322_677f_443a_83fd_6352c7523727.slice/crio-c875a052c32c16be76dcdabc506afa2e064c1cff05b8122d5a50728add62fc82 WatchSource:0}: Error finding container c875a052c32c16be76dcdabc506afa2e064c1cff05b8122d5a50728add62fc82: Status 404 returned error can't find the container with id c875a052c32c16be76dcdabc506afa2e064c1cff05b8122d5a50728add62fc82 Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.602056 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t"] Jan 30 16:14:05 crc kubenswrapper[4740]: W0130 16:14:05.602134 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82688ddf_9d92_4ff1_873b_ca5766766189.slice/crio-bb3551c2b0f2a1b12a2cda034a427d49a9c188626ac686332c351761b6f3094d WatchSource:0}: Error finding container bb3551c2b0f2a1b12a2cda034a427d49a9c188626ac686332c351761b6f3094d: Status 404 returned error can't find the container with id bb3551c2b0f2a1b12a2cda034a427d49a9c188626ac686332c351761b6f3094d Jan 30 16:14:05 crc kubenswrapper[4740]: W0130 16:14:05.602746 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82b9c083_1154_46de_958e_6a7726aca988.slice/crio-18ee965d38eaea270ea14b6e9be4bfba82c29989c87d61118408d79124bb7ac6 WatchSource:0}: Error finding container 18ee965d38eaea270ea14b6e9be4bfba82c29989c87d61118408d79124bb7ac6: Status 404 returned error can't find the container with id 18ee965d38eaea270ea14b6e9be4bfba82c29989c87d61118408d79124bb7ac6 Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.617065 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5"] Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.621617 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7c8zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-lqf5n_openstack-operators(c35e116f-97e5-47ec-aa40-955321cb09d5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.622792 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n" podUID="c35e116f-97e5-47ec-aa40-955321cb09d5" Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.625295 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv"] Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.625883 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5w5fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-d4hf5_openstack-operators(88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.627509 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5" podUID="88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c" Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.630993 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55"] Jan 30 16:14:05 crc kubenswrapper[4740]: I0130 16:14:05.644636 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb"] Jan 30 16:14:05 crc kubenswrapper[4740]: W0130 16:14:05.648225 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a6574e0_d6db_4e3d_9203_c3b28694e68f.slice/crio-3d8954d79188cdba1568674ecb095fe94d40758d61cf5c46dd1b5086c2d9ee32 WatchSource:0}: Error finding container 3d8954d79188cdba1568674ecb095fe94d40758d61cf5c46dd1b5086c2d9ee32: Status 404 returned error can't find the container with id 3d8954d79188cdba1568674ecb095fe94d40758d61cf5c46dd1b5086c2d9ee32 Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.651081 4740 
Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.651081 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6r5hf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-bmhgb_openstack-operators(0040ed18-716a-4452-8209-c45c497d7fae): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.652442 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" podUID="0040ed18-716a-4452-8209-c45c497d7fae"
Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.680103 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bljkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-dpf55_openstack-operators(5a6574e0-d6db-4e3d-9203-c3b28694e68f): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 30 16:14:05 crc kubenswrapper[4740]: E0130 16:14:05.681431 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55" podUID="5a6574e0-d6db-4e3d-9203-c3b28694e68f"
Jan 30 16:14:06 crc kubenswrapper[4740]: I0130 16:14:06.567382 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5" event={"ID":"88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c","Type":"ContainerStarted","Data":"8b9bbe558768668725146e52d11cd0a512bd068cf62682ac91942ca897a03dab"}
Jan 30 16:14:06 crc kubenswrapper[4740]: E0130 16:14:06.575609 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5" podUID="88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c"
Jan 30 16:14:06 crc kubenswrapper[4740]: I0130 16:14:06.608885 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" event={"ID":"0040ed18-716a-4452-8209-c45c497d7fae","Type":"ContainerStarted","Data":"6c1e768500a0ffb2f6e2d89816e07a63175b0df9446412c72772ce86a0e7e7a3"}
Jan 30 16:14:06 crc kubenswrapper[4740]: E0130 16:14:06.615485 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" podUID="0040ed18-716a-4452-8209-c45c497d7fae"
Jan 30 16:14:06 crc kubenswrapper[4740]: I0130 16:14:06.615610 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55" event={"ID":"5a6574e0-d6db-4e3d-9203-c3b28694e68f","Type":"ContainerStarted","Data":"3d8954d79188cdba1568674ecb095fe94d40758d61cf5c46dd1b5086c2d9ee32"}
Jan 30 16:14:06 crc kubenswrapper[4740]: E0130 16:14:06.617666 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55" podUID="5a6574e0-d6db-4e3d-9203-c3b28694e68f"
Jan 30 16:14:06 crc kubenswrapper[4740]: I0130 16:14:06.618183 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm" event={"ID":"b8a01322-677f-443a-83fd-6352c7523727","Type":"ContainerStarted","Data":"c875a052c32c16be76dcdabc506afa2e064c1cff05b8122d5a50728add62fc82"}
Jan 30 16:14:06 crc kubenswrapper[4740]: I0130 16:14:06.620337 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l" event={"ID":"011e9da6-1efe-4002-91f3-0aa0923fa015","Type":"ContainerStarted","Data":"b13e31bba3e1d01a8741a18334ca755cc27ab06c958c8bd41e1c558f4b176b82"}
Jan 30 16:14:06 crc kubenswrapper[4740]: I0130 16:14:06.623171 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n" event={"ID":"c35e116f-97e5-47ec-aa40-955321cb09d5","Type":"ContainerStarted","Data":"4a6e1d1fb0a7fcc35bc2f03b53f61035263d6a958e3f98135c9dd7620458fcf6"}
Jan 30 16:14:06 crc kubenswrapper[4740]: I0130 16:14:06.632548 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24" event={"ID":"d6ebfaaf-00f6-430e-bcb2-b5041395a101","Type":"ContainerStarted","Data":"9c9f6c61fb03f9b1b7120d7376c49a4fb3fd3211041fc19d8fd0255111bb1b0f"}
Jan 30 16:14:06 crc kubenswrapper[4740]: E0130 16:14:06.632630 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n" podUID="c35e116f-97e5-47ec-aa40-955321cb09d5"
Jan 30 16:14:06 crc kubenswrapper[4740]: I0130 16:14:06.643058 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t" event={"ID":"82688ddf-9d92-4ff1-873b-ca5766766189","Type":"ContainerStarted","Data":"bb3551c2b0f2a1b12a2cda034a427d49a9c188626ac686332c351761b6f3094d"}
Jan 30 16:14:06 crc kubenswrapper[4740]: I0130 16:14:06.647067 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-v8885" event={"ID":"b82bfd4e-e72e-4941-b8aa-1baae2433217","Type":"ContainerStarted","Data":"a00a5936995be72ee807beb46e6cfe92735d23ec1344d0149b8054ce96d576ec"}
Jan 30 16:14:06 crc kubenswrapper[4740]: I0130 16:14:06.666303 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv" event={"ID":"82b9c083-1154-46de-958e-6a7726aca988","Type":"ContainerStarted","Data":"18ee965d38eaea270ea14b6e9be4bfba82c29989c87d61118408d79124bb7ac6"}
Jan 30 16:14:06 crc kubenswrapper[4740]: I0130 16:14:06.814999 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert\") pod \"infra-operator-controller-manager-79955696d6-6wz9h\" (UID: \"736c30f6-a1e4-47aa-a6d0-713baf99ad69\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"
Jan 30 16:14:06 crc kubenswrapper[4740]: E0130 16:14:06.815843 4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 16:14:06 crc kubenswrapper[4740]: E0130 16:14:06.815920 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert podName:736c30f6-a1e4-47aa-a6d0-713baf99ad69 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:10.815894859 +0000 UTC m=+1099.452957458 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert") pod "infra-operator-controller-manager-79955696d6-6wz9h" (UID: "736c30f6-a1e4-47aa-a6d0-713baf99ad69") : secret "infra-operator-webhook-server-cert" not found
Jan 30 16:14:07 crc kubenswrapper[4740]: I0130 16:14:07.223264 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg\" (UID: \"107dde7f-ab99-4981-ba7a-0c6756408b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg"
Jan 30 16:14:07 crc kubenswrapper[4740]: E0130 16:14:07.223615 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 16:14:07 crc kubenswrapper[4740]: E0130 16:14:07.223908 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert podName:107dde7f-ab99-4981-ba7a-0c6756408b54 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:11.223860664 +0000 UTC m=+1099.860923263 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" (UID: "107dde7f-ab99-4981-ba7a-0c6756408b54") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 16:14:07 crc kubenswrapper[4740]: I0130 16:14:07.634366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8"
Jan 30 16:14:07 crc kubenswrapper[4740]: I0130 16:14:07.634433 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8"
Jan 30 16:14:07 crc kubenswrapper[4740]: E0130 16:14:07.634582 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 30 16:14:07 crc kubenswrapper[4740]: E0130 16:14:07.634671 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 30 16:14:07 crc kubenswrapper[4740]: E0130 16:14:07.634793 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs podName:4b1298c0-d749-42f3-97c1-ad1b19db8f96 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:11.634774602 +0000 UTC m=+1100.271837201 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs") pod "openstack-operator-controller-manager-6bdc979b86-rndp8" (UID: "4b1298c0-d749-42f3-97c1-ad1b19db8f96") : secret "webhook-server-cert" not found
Jan 30 16:14:07 crc kubenswrapper[4740]: E0130 16:14:07.635202 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs podName:4b1298c0-d749-42f3-97c1-ad1b19db8f96 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:11.635175992 +0000 UTC m=+1100.272238591 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs") pod "openstack-operator-controller-manager-6bdc979b86-rndp8" (UID: "4b1298c0-d749-42f3-97c1-ad1b19db8f96") : secret "metrics-server-cert" not found
Jan 30 16:14:07 crc kubenswrapper[4740]: E0130 16:14:07.696655 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5" podUID="88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c"
Jan 30 16:14:07 crc kubenswrapper[4740]: E0130 16:14:07.697622 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n" podUID="c35e116f-97e5-47ec-aa40-955321cb09d5"
Jan 30 16:14:07 crc kubenswrapper[4740]: E0130 16:14:07.697633 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55" podUID="5a6574e0-d6db-4e3d-9203-c3b28694e68f"
Jan 30 16:14:07 crc kubenswrapper[4740]: E0130 16:14:07.697651 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" podUID="0040ed18-716a-4452-8209-c45c497d7fae"
Jan 30 16:14:10 crc kubenswrapper[4740]: I0130 16:14:10.911452 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert\") pod \"infra-operator-controller-manager-79955696d6-6wz9h\" (UID: \"736c30f6-a1e4-47aa-a6d0-713baf99ad69\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"
Jan 30 16:14:10 crc kubenswrapper[4740]: E0130 16:14:10.911673 4740 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 16:14:10 crc kubenswrapper[4740]: E0130 16:14:10.912042 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert podName:736c30f6-a1e4-47aa-a6d0-713baf99ad69 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:18.912021342 +0000 UTC m=+1107.549083941 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert") pod "infra-operator-controller-manager-79955696d6-6wz9h" (UID: "736c30f6-a1e4-47aa-a6d0-713baf99ad69") : secret "infra-operator-webhook-server-cert" not found
Jan 30 16:14:11 crc kubenswrapper[4740]: I0130 16:14:11.318351 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg\" (UID: \"107dde7f-ab99-4981-ba7a-0c6756408b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg"
Jan 30 16:14:11 crc kubenswrapper[4740]: E0130 16:14:11.318672 4740 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 16:14:11 crc kubenswrapper[4740]: E0130 16:14:11.318786 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert podName:107dde7f-ab99-4981-ba7a-0c6756408b54 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:19.318760506 +0000 UTC m=+1107.955823105 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" (UID: "107dde7f-ab99-4981-ba7a-0c6756408b54") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 16:14:11 crc kubenswrapper[4740]: I0130 16:14:11.724957 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8"
Jan 30 16:14:11 crc kubenswrapper[4740]: E0130 16:14:11.725139 4740 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 30 16:14:11 crc kubenswrapper[4740]: I0130 16:14:11.725536 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8"
Jan 30 16:14:11 crc kubenswrapper[4740]: E0130 16:14:11.725607 4740 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 30 16:14:11 crc kubenswrapper[4740]: E0130 16:14:11.725614 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs podName:4b1298c0-d749-42f3-97c1-ad1b19db8f96 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:19.725587903 +0000 UTC m=+1108.362650512 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs") pod "openstack-operator-controller-manager-6bdc979b86-rndp8" (UID: "4b1298c0-d749-42f3-97c1-ad1b19db8f96") : secret "metrics-server-cert" not found Jan 30 16:14:11 crc kubenswrapper[4740]: E0130 16:14:11.725644 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs podName:4b1298c0-d749-42f3-97c1-ad1b19db8f96 nodeName:}" failed. No retries permitted until 2026-01-30 16:14:19.725631034 +0000 UTC m=+1108.362693633 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs") pod "openstack-operator-controller-manager-6bdc979b86-rndp8" (UID: "4b1298c0-d749-42f3-97c1-ad1b19db8f96") : secret "webhook-server-cert" not found Jan 30 16:14:17 crc kubenswrapper[4740]: E0130 16:14:17.352327 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 30 16:14:17 crc kubenswrapper[4740]: E0130 16:14:17.352963 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jh9bv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-2cj65_openstack-operators(de27448d-0b23-4bbb-81b2-7818361e53bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:14:17 crc kubenswrapper[4740]: E0130 16:14:17.354233 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65" podUID="de27448d-0b23-4bbb-81b2-7818361e53bf" Jan 30 16:14:17 crc kubenswrapper[4740]: E0130 16:14:17.788619 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65" podUID="de27448d-0b23-4bbb-81b2-7818361e53bf" Jan 30 16:14:18 crc kubenswrapper[4740]: I0130 16:14:18.988564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert\") pod \"infra-operator-controller-manager-79955696d6-6wz9h\" (UID: \"736c30f6-a1e4-47aa-a6d0-713baf99ad69\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h" Jan 30 16:14:18 crc kubenswrapper[4740]: I0130 16:14:18.996302 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/736c30f6-a1e4-47aa-a6d0-713baf99ad69-cert\") pod \"infra-operator-controller-manager-79955696d6-6wz9h\" (UID: \"736c30f6-a1e4-47aa-a6d0-713baf99ad69\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h" Jan 30 16:14:19 crc kubenswrapper[4740]: E0130 16:14:19.137778 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 30 16:14:19 crc kubenswrapper[4740]: E0130 16:14:19.138037 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c6bdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-6sl24_openstack-operators(d6ebfaaf-00f6-430e-bcb2-b5041395a101): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:14:19 crc kubenswrapper[4740]: E0130 16:14:19.139723 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24" podUID="d6ebfaaf-00f6-430e-bcb2-b5041395a101" Jan 30 16:14:19 crc kubenswrapper[4740]: I0130 16:14:19.172980 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h" Jan 30 16:14:19 crc kubenswrapper[4740]: I0130 16:14:19.395107 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg\" (UID: \"107dde7f-ab99-4981-ba7a-0c6756408b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" Jan 30 16:14:19 crc kubenswrapper[4740]: I0130 16:14:19.400944 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/107dde7f-ab99-4981-ba7a-0c6756408b54-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg\" (UID: \"107dde7f-ab99-4981-ba7a-0c6756408b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" Jan 30 16:14:19 crc kubenswrapper[4740]: I0130 16:14:19.564341 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" Jan 30 16:14:19 crc kubenswrapper[4740]: E0130 16:14:19.804693 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24" podUID="d6ebfaaf-00f6-430e-bcb2-b5041395a101" Jan 30 16:14:19 crc kubenswrapper[4740]: I0130 16:14:19.805051 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:19 crc kubenswrapper[4740]: I0130 16:14:19.805111 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:19 crc kubenswrapper[4740]: I0130 16:14:19.809719 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-metrics-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:19 crc kubenswrapper[4740]: I0130 16:14:19.813481 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4b1298c0-d749-42f3-97c1-ad1b19db8f96-webhook-certs\") pod \"openstack-operator-controller-manager-6bdc979b86-rndp8\" (UID: \"4b1298c0-d749-42f3-97c1-ad1b19db8f96\") " pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:20 crc kubenswrapper[4740]: I0130 16:14:20.066605 4740 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:20 crc kubenswrapper[4740]: E0130 16:14:20.129107 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 30 16:14:20 crc kubenswrapper[4740]: E0130 16:14:20.129540 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gwffk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-nwnsv_openstack-operators(ac86533b-0c5a-4704-b497-6e7e1114d938): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:14:20 crc kubenswrapper[4740]: E0130 16:14:20.130776 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv" podUID="ac86533b-0c5a-4704-b497-6e7e1114d938" Jan 30 16:14:20 crc kubenswrapper[4740]: E0130 16:14:20.818274 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv" podUID="ac86533b-0c5a-4704-b497-6e7e1114d938" Jan 30 16:14:20 crc kubenswrapper[4740]: E0130 16:14:20.875755 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898" Jan 30 16:14:20 crc kubenswrapper[4740]: E0130 16:14:20.876182 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tn4pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d874c8fc-xsqtm_openstack-operators(4ffa4d95-fc8d-4352-9bb3-b74038d53453): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:14:20 crc kubenswrapper[4740]: E0130 16:14:20.877476 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm" podUID="4ffa4d95-fc8d-4352-9bb3-b74038d53453" Jan 30 16:14:21 crc kubenswrapper[4740]: E0130 16:14:21.535040 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 30 16:14:21 crc kubenswrapper[4740]: E0130 16:14:21.544158 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjkq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-gtt5t_openstack-operators(82688ddf-9d92-4ff1-873b-ca5766766189): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:14:21 crc kubenswrapper[4740]: E0130 16:14:21.548724 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t" podUID="82688ddf-9d92-4ff1-873b-ca5766766189" Jan 30 16:14:21 crc kubenswrapper[4740]: E0130 16:14:21.827307 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t" podUID="82688ddf-9d92-4ff1-873b-ca5766766189" Jan 30 16:14:21 crc kubenswrapper[4740]: E0130 16:14:21.827558 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm" podUID="4ffa4d95-fc8d-4352-9bb3-b74038d53453" Jan 30 16:14:33 crc kubenswrapper[4740]: E0130 16:14:33.946594 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 30 16:14:33 crc kubenswrapper[4740]: E0130 16:14:33.947862 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m8dw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
nova-operator-controller-manager-55bff696bd-bzjc5_openstack-operators(6ba6b433-534d-4a14-9fbb-4418b1c39fd9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:14:33 crc kubenswrapper[4740]: E0130 16:14:33.949128 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5" podUID="6ba6b433-534d-4a14-9fbb-4418b1c39fd9" Jan 30 16:14:34 crc kubenswrapper[4740]: E0130 16:14:34.948795 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5" podUID="6ba6b433-534d-4a14-9fbb-4418b1c39fd9" Jan 30 16:14:35 crc kubenswrapper[4740]: E0130 16:14:35.880114 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 30 16:14:35 crc kubenswrapper[4740]: E0130 16:14:35.881009 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7c8zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-lqf5n_openstack-operators(c35e116f-97e5-47ec-aa40-955321cb09d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:14:35 crc kubenswrapper[4740]: E0130 16:14:35.882374 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n" podUID="c35e116f-97e5-47ec-aa40-955321cb09d5" Jan 30 16:14:35 crc kubenswrapper[4740]: E0130 16:14:35.993189 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 30 16:14:35 crc kubenswrapper[4740]: E0130 16:14:35.993499 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6r5hf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-bmhgb_openstack-operators(0040ed18-716a-4452-8209-c45c497d7fae): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" logger="UnhandledError" Jan 30 16:14:35 crc kubenswrapper[4740]: E0130 16:14:35.994750 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \\\"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\\\": context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" podUID="0040ed18-716a-4452-8209-c45c497d7fae" Jan 30 16:14:36 crc kubenswrapper[4740]: I0130 16:14:36.908867 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg"] Jan 30 16:14:37 crc kubenswrapper[4740]: I0130 16:14:37.030000 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h"] Jan 30 16:14:37 crc kubenswrapper[4740]: I0130 16:14:37.036128 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8"] Jan 30 16:14:37 crc kubenswrapper[4740]: W0130 16:14:37.543139 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod107dde7f_ab99_4981_ba7a_0c6756408b54.slice/crio-dd8dc9b256eb75ae87dfd7050f4331f422e0fe40d5ed28fccd4a68bdb4b0e36f WatchSource:0}: Error finding container dd8dc9b256eb75ae87dfd7050f4331f422e0fe40d5ed28fccd4a68bdb4b0e36f: Status 404 returned error can't find the container with id dd8dc9b256eb75ae87dfd7050f4331f422e0fe40d5ed28fccd4a68bdb4b0e36f Jan 30 16:14:37 crc kubenswrapper[4740]: W0130 16:14:37.543610 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod736c30f6_a1e4_47aa_a6d0_713baf99ad69.slice/crio-728a8002cf63221b48097e23a2c6161096f171f42cfa0838dce86662cb5e33e1 WatchSource:0}: Error finding container 728a8002cf63221b48097e23a2c6161096f171f42cfa0838dce86662cb5e33e1: Status 404 returned error can't find the 
container with id 728a8002cf63221b48097e23a2c6161096f171f42cfa0838dce86662cb5e33e1 Jan 30 16:14:37 crc kubenswrapper[4740]: W0130 16:14:37.547152 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b1298c0_d749_42f3_97c1_ad1b19db8f96.slice/crio-272194d68df1446f1e7ddd4d83a94a06bf60f106ba1070c2e8aa9b75fad8fa31 WatchSource:0}: Error finding container 272194d68df1446f1e7ddd4d83a94a06bf60f106ba1070c2e8aa9b75fad8fa31: Status 404 returned error can't find the container with id 272194d68df1446f1e7ddd4d83a94a06bf60f106ba1070c2e8aa9b75fad8fa31 Jan 30 16:14:37 crc kubenswrapper[4740]: I0130 16:14:37.977908 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h" event={"ID":"736c30f6-a1e4-47aa-a6d0-713baf99ad69","Type":"ContainerStarted","Data":"728a8002cf63221b48097e23a2c6161096f171f42cfa0838dce86662cb5e33e1"} Jan 30 16:14:37 crc kubenswrapper[4740]: I0130 16:14:37.985580 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9" event={"ID":"97e430a6-ad51-4e80-999e-75e568b1d6b6","Type":"ContainerStarted","Data":"07d964a59b86939a8be3c60628a3a9992e5db878fef420edccfa65d5e57d39ef"} Jan 30 16:14:37 crc kubenswrapper[4740]: I0130 16:14:37.985931 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9" Jan 30 16:14:37 crc kubenswrapper[4740]: I0130 16:14:37.991207 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" event={"ID":"4b1298c0-d749-42f3-97c1-ad1b19db8f96","Type":"ContainerStarted","Data":"272194d68df1446f1e7ddd4d83a94a06bf60f106ba1070c2e8aa9b75fad8fa31"} Jan 30 16:14:37 crc kubenswrapper[4740]: I0130 16:14:37.994703 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" event={"ID":"107dde7f-ab99-4981-ba7a-0c6756408b54","Type":"ContainerStarted","Data":"dd8dc9b256eb75ae87dfd7050f4331f422e0fe40d5ed28fccd4a68bdb4b0e36f"} Jan 30 16:14:38 crc kubenswrapper[4740]: I0130 16:14:38.007735 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9" podStartSLOduration=5.268194078 podStartE2EDuration="36.007709275s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:04.936498356 +0000 UTC m=+1093.573560955" lastFinishedPulling="2026-01-30 16:14:35.676013543 +0000 UTC m=+1124.313076152" observedRunningTime="2026-01-30 16:14:38.004327031 +0000 UTC m=+1126.641389630" watchObservedRunningTime="2026-01-30 16:14:38.007709275 +0000 UTC m=+1126.644771874" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.093055 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627" event={"ID":"a7ff8a9d-40f9-4354-aa10-e7e93907a0a5","Type":"ContainerStarted","Data":"7b78e1cc738c8cab92f9ab1153c913eef65b519f2e40518b589f50d173728f4a"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.094386 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.124869 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65" event={"ID":"de27448d-0b23-4bbb-81b2-7818361e53bf","Type":"ContainerStarted","Data":"b441c89ce8ca8f60adbdb4283b7c68f69eadd1a0e0d674d82525cd86ad5c4ea4"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.125770 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.142603 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm" event={"ID":"4ffa4d95-fc8d-4352-9bb3-b74038d53453","Type":"ContainerStarted","Data":"90cca6432eee45488149f50530a5064d958554774028292b81047700ed3459ae"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.143416 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.169323 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5" event={"ID":"88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c","Type":"ContainerStarted","Data":"7e8686d296e08de45c26fd3f79deb94fb4d8174d9f2006768557198b0e14baec"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.170224 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.183839 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24" event={"ID":"d6ebfaaf-00f6-430e-bcb2-b5041395a101","Type":"ContainerStarted","Data":"4f0dad8840b2e7a1a5f611d15148fee529cc29a10928f3c83b86b4bfb1e33cb5"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.184201 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.199611 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627" podStartSLOduration=6.7733543019999995 podStartE2EDuration="37.19958318s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.211825189 +0000 UTC m=+1093.848887788" lastFinishedPulling="2026-01-30 16:14:35.638054027 +0000 UTC m=+1124.275116666" observedRunningTime="2026-01-30 16:14:39.196944554 +0000 UTC m=+1127.834007153" watchObservedRunningTime="2026-01-30 16:14:39.19958318 +0000 UTC m=+1127.836645779" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.207718 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2" event={"ID":"b9648635-827e-4a21-8890-ba8b1772d7c4","Type":"ContainerStarted","Data":"7c9d0b3640e62e96f77910c56614d3d2903c4841fe44559c27c9685838186f77"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.208562 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.237011 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5" podStartSLOduration=5.16153045 podStartE2EDuration="37.236983301s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.624751928 +0000 UTC m=+1094.261814527" lastFinishedPulling="2026-01-30 16:14:37.700204759 +0000 UTC m=+1126.337267378" observedRunningTime="2026-01-30 16:14:39.220670555 +0000 UTC m=+1127.857733154" watchObservedRunningTime="2026-01-30 16:14:39.236983301 +0000 UTC m=+1127.874045900" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.238923 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t" event={"ID":"82688ddf-9d92-4ff1-873b-ca5766766189","Type":"ContainerStarted","Data":"434d888ef7979313366720eef81a8c7caa3a314a88e28ef26939cb7b623fb7f8"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.239790 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.277694 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l" event={"ID":"011e9da6-1efe-4002-91f3-0aa0923fa015","Type":"ContainerStarted","Data":"067bfd79ecf12edbe3336c3fe83d9fbf8d5bbdce7878778294514e3e4924078c"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.278553 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.282999 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm" podStartSLOduration=3.777148379 podStartE2EDuration="37.282969536s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:04.293984432 +0000 UTC m=+1092.931047031" lastFinishedPulling="2026-01-30 16:14:37.799805589 +0000 UTC m=+1126.436868188" observedRunningTime="2026-01-30 16:14:39.278820973 +0000 UTC m=+1127.915883572" watchObservedRunningTime="2026-01-30 16:14:39.282969536 +0000 UTC m=+1127.920032135" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.306792 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-v8885" event={"ID":"b82bfd4e-e72e-4941-b8aa-1baae2433217","Type":"ContainerStarted","Data":"fa859ca437dc97b130daaef3dfde3e508b2d5a2408e62603f51ec833678bb9af"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.308103 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-v8885" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.318273 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv" event={"ID":"82b9c083-1154-46de-958e-6a7726aca988","Type":"ContainerStarted","Data":"fc4cc1eec0354f0a3fc97490cac742d3dc9c002972d5d0200a4e100e1934be7f"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.319541 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.355683 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65" podStartSLOduration=4.26686659 podStartE2EDuration="37.355655616s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:04.66670934 +0000 UTC m=+1093.303771939" lastFinishedPulling="2026-01-30 16:14:37.755498366 +0000 UTC m=+1126.392560965" observedRunningTime="2026-01-30 16:14:39.331293019 +0000 UTC m=+1127.968355618" watchObservedRunningTime="2026-01-30 16:14:39.355655616 +0000 UTC m=+1127.992718215" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.358160 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.358195 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv" event={"ID":"ac86533b-0c5a-4704-b497-6e7e1114d938","Type":"ContainerStarted","Data":"777c27d01eeea985ba3a4c36623cff4ee016f9697ce001762bd9254b91f5f82a"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.358213 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm" event={"ID":"b8a01322-677f-443a-83fd-6352c7523727","Type":"ContainerStarted","Data":"f0fdb2d07061696ab86fc82e33a44f9ff38ecb1ff9d75089acc6ddb644c70632"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.358903 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.377042 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24" podStartSLOduration=4.159700086 podStartE2EDuration="36.377012498s" podCreationTimestamp="2026-01-30 16:14:03 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.538162973 +0000 UTC m=+1094.175225572" lastFinishedPulling="2026-01-30 16:14:37.755475395 +0000 UTC m=+1126.392537984" observedRunningTime="2026-01-30 16:14:39.368028564 +0000 UTC m=+1128.005091163" watchObservedRunningTime="2026-01-30 16:14:39.377012498 +0000 UTC m=+1128.014075097" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.377737 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55" event={"ID":"5a6574e0-d6db-4e3d-9203-c3b28694e68f","Type":"ContainerStarted","Data":"8babbe28b31af594409f3517f200fbce274721b75b16cd619c5d19dd1954b570"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.378633 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.418479 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l" podStartSLOduration=6.280207259 podStartE2EDuration="36.418451699s" podCreationTimestamp="2026-01-30 16:14:03 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.533881986 +0000 UTC m=+1094.170944585" lastFinishedPulling="2026-01-30 16:14:35.672126416 +0000 UTC m=+1124.309189025" observedRunningTime="2026-01-30 16:14:39.415607329 +0000 UTC m=+1128.052669928" watchObservedRunningTime="2026-01-30 16:14:39.418451699 +0000 UTC m=+1128.055514298" Jan 30 16:14:39 crc kubenswrapper[4740]: 
I0130 16:14:39.434268 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" event={"ID":"4b1298c0-d749-42f3-97c1-ad1b19db8f96","Type":"ContainerStarted","Data":"dcf1667aec3b6e68aadef2c3c44e5f146cb879301e02a41c26bd22ae72cb6001"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.434646 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.460743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2" event={"ID":"b3f3f690-263c-406b-9651-b1d548a73010","Type":"ContainerStarted","Data":"bf790a30e99114c02120f416264c7228f3b33e7c7afab01615567e928e319bff"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.461301 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.483133 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm" event={"ID":"9fa5493f-2e76-4fda-9a43-4d8e7828f2a7","Type":"ContainerStarted","Data":"459ef2d2ace487435ddc6368aba95c1f97e6460d10d5d79c75838ff2d9cb58ba"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.484179 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.508757 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm" podStartSLOduration=6.453704578 podStartE2EDuration="36.508732717s" podCreationTimestamp="2026-01-30 16:14:03 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.621144508 +0000 UTC m=+1094.258207107" lastFinishedPulling="2026-01-30 16:14:35.676172647 +0000 UTC m=+1124.313235246" observedRunningTime="2026-01-30 16:14:39.460732512 +0000 UTC m=+1128.097795111" watchObservedRunningTime="2026-01-30 16:14:39.508732717 +0000 UTC m=+1128.145795306" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.511455 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2" podStartSLOduration=6.769983979 podStartE2EDuration="37.511446585s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:04.935526511 +0000 UTC m=+1093.572589110" lastFinishedPulling="2026-01-30 16:14:35.676989107 +0000 UTC m=+1124.314051716" observedRunningTime="2026-01-30 16:14:39.505209899 +0000 UTC m=+1128.142272518" watchObservedRunningTime="2026-01-30 16:14:39.511446585 +0000 UTC m=+1128.148509184" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.515414 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d" event={"ID":"7a1d5aff-da4c-4c0e-9616-44da3511eef2","Type":"ContainerStarted","Data":"d2f72986d16cb9151f50d48988776069f2e74ad1e549e7fa32d265fa3cd4bd32"} Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.515565 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d" Jan 30 16:14:39 crc 
kubenswrapper[4740]: I0130 16:14:39.544823 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv" podStartSLOduration=6.482024393 podStartE2EDuration="36.544805285s" podCreationTimestamp="2026-01-30 16:14:03 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.614273217 +0000 UTC m=+1094.251335816" lastFinishedPulling="2026-01-30 16:14:35.677054109 +0000 UTC m=+1124.314116708" observedRunningTime="2026-01-30 16:14:39.538503698 +0000 UTC m=+1128.175566297" watchObservedRunningTime="2026-01-30 16:14:39.544805285 +0000 UTC m=+1128.181867884" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.579212 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t" podStartSLOduration=4.542835244 podStartE2EDuration="36.579188851s" podCreationTimestamp="2026-01-30 16:14:03 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.620841451 +0000 UTC m=+1094.257904050" lastFinishedPulling="2026-01-30 16:14:37.657195058 +0000 UTC m=+1126.294257657" observedRunningTime="2026-01-30 16:14:39.572500655 +0000 UTC m=+1128.209563254" watchObservedRunningTime="2026-01-30 16:14:39.579188851 +0000 UTC m=+1128.216251450" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.617655 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv" podStartSLOduration=4.973164634 podStartE2EDuration="37.617630539s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.111129303 +0000 UTC m=+1093.748191902" lastFinishedPulling="2026-01-30 16:14:37.755595178 +0000 UTC m=+1126.392657807" observedRunningTime="2026-01-30 16:14:39.615718731 +0000 UTC m=+1128.252781330" watchObservedRunningTime="2026-01-30 16:14:39.617630539 +0000 UTC m=+1128.254693148" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.708953 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-v8885" podStartSLOduration=7.637095824 podStartE2EDuration="37.70892346s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.577176094 +0000 UTC m=+1094.214238693" lastFinishedPulling="2026-01-30 16:14:35.64900369 +0000 UTC m=+1124.286066329" observedRunningTime="2026-01-30 16:14:39.65184882 +0000 UTC m=+1128.288911439" watchObservedRunningTime="2026-01-30 16:14:39.70892346 +0000 UTC m=+1128.345986059" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.711651 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d" podStartSLOduration=6.601177148 podStartE2EDuration="37.711632358s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:04.549887232 +0000 UTC m=+1093.186949831" lastFinishedPulling="2026-01-30 16:14:35.660342392 +0000 UTC m=+1124.297405041" observedRunningTime="2026-01-30 16:14:39.691064427 +0000 UTC m=+1128.328127026" watchObservedRunningTime="2026-01-30 16:14:39.711632358 +0000 UTC m=+1128.348694967" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.748240 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2" podStartSLOduration=6.444705063 
podStartE2EDuration="37.748212239s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:04.368600669 +0000 UTC m=+1093.005663258" lastFinishedPulling="2026-01-30 16:14:35.672107825 +0000 UTC m=+1124.309170434" observedRunningTime="2026-01-30 16:14:39.729042151 +0000 UTC m=+1128.366104740" watchObservedRunningTime="2026-01-30 16:14:39.748212239 +0000 UTC m=+1128.385274838" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.765968 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55" podStartSLOduration=4.743321865 podStartE2EDuration="36.76592524s" podCreationTimestamp="2026-01-30 16:14:03 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.679900011 +0000 UTC m=+1094.316962610" lastFinishedPulling="2026-01-30 16:14:37.702503386 +0000 UTC m=+1126.339565985" observedRunningTime="2026-01-30 16:14:39.759909 +0000 UTC m=+1128.396971599" watchObservedRunningTime="2026-01-30 16:14:39.76592524 +0000 UTC m=+1128.402987839" Jan 30 16:14:39 crc kubenswrapper[4740]: I0130 16:14:39.815691 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm" podStartSLOduration=6.708122501 podStartE2EDuration="37.815648638s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:04.564584588 +0000 UTC m=+1093.201647187" lastFinishedPulling="2026-01-30 16:14:35.672110715 +0000 UTC m=+1124.309173324" observedRunningTime="2026-01-30 16:14:39.800215563 +0000 UTC m=+1128.437278162" watchObservedRunningTime="2026-01-30 16:14:39.815648638 +0000 UTC m=+1128.452711237" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.048836 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tzdc2" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.075082 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" podStartSLOduration=40.075060499 podStartE2EDuration="40.075060499s" podCreationTimestamp="2026-01-30 16:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:14:39.864165866 +0000 UTC m=+1128.501228465" watchObservedRunningTime="2026-01-30 16:14:43.075060499 +0000 UTC m=+1131.712123088" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.080418 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xsqtm" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.083862 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-q652d" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.111621 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2cj65" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.178678 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.444524 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-nwnsv" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.472243 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w7jt2" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.531892 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-g8sm9" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.539573 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-d4hf5" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.664667 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-v8885" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.932421 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tl627" Jan 30 16:14:43 crc kubenswrapper[4740]: I0130 16:14:43.998599 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-hszqm" Jan 30 16:14:44 crc kubenswrapper[4740]: I0130 16:14:44.092121 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gtt5t" Jan 30 16:14:44 crc kubenswrapper[4740]: I0130 16:14:44.104686 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-df45f6d5f-lc4fv" Jan 30 16:14:44 crc kubenswrapper[4740]: I0130 16:14:44.104755 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-dpf55" Jan 30 16:14:44 crc kubenswrapper[4740]: I0130 16:14:44.104884 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-92p8l" Jan 30 16:14:44 crc kubenswrapper[4740]: I0130 16:14:44.105079 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-6sl24" Jan 30 16:14:44 crc kubenswrapper[4740]: I0130 16:14:44.577119 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" event={"ID":"107dde7f-ab99-4981-ba7a-0c6756408b54","Type":"ContainerStarted","Data":"8cfbf993fcb88579b501e235f69fec9dab9261ec04d19fd1e52fd587aaf2032e"} Jan 30 16:14:44 crc kubenswrapper[4740]: I0130 16:14:44.577552 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" Jan 30 16:14:44 crc kubenswrapper[4740]: I0130 16:14:44.588405 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h" event={"ID":"736c30f6-a1e4-47aa-a6d0-713baf99ad69","Type":"ContainerStarted","Data":"3eb182951f4a7fab6eade8622f399006fe4bf1cbc473f6805c065f55287ca70e"} Jan 30 16:14:44 crc kubenswrapper[4740]: I0130 16:14:44.588565 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h" Jan 30 16:14:44 crc kubenswrapper[4740]: I0130 16:14:44.616256 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" podStartSLOduration=35.106567507 podStartE2EDuration="41.616224699s" podCreationTimestamp="2026-01-30 16:14:03 +0000 UTC" firstStartedPulling="2026-01-30 16:14:37.580102669 +0000 UTC m=+1126.217165268" lastFinishedPulling="2026-01-30 16:14:44.089759861 +0000 UTC m=+1132.726822460" observedRunningTime="2026-01-30 16:14:44.614180798 +0000 UTC m=+1133.251243397" watchObservedRunningTime="2026-01-30 16:14:44.616224699 +0000 UTC m=+1133.253287318" Jan 30 16:14:44 crc kubenswrapper[4740]: I0130 16:14:44.635556 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h" podStartSLOduration=36.112175016 podStartE2EDuration="42.63553422s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:37.56769987 +0000 UTC m=+1126.204762459" lastFinishedPulling="2026-01-30 16:14:44.091059064 +0000 UTC m=+1132.728121663" observedRunningTime="2026-01-30 16:14:44.633185421 +0000 UTC m=+1133.270248020" watchObservedRunningTime="2026-01-30 16:14:44.63553422 +0000 UTC m=+1133.272596819" Jan 30 16:14:47 crc kubenswrapper[4740]: E0130 16:14:47.337802 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n" podUID="c35e116f-97e5-47ec-aa40-955321cb09d5" Jan 30 16:14:49 crc kubenswrapper[4740]: I0130 16:14:49.179761 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6wz9h" Jan 30 16:14:49 crc kubenswrapper[4740]: I0130 16:14:49.573878 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg" Jan 30 16:14:50 crc kubenswrapper[4740]: I0130 16:14:50.072697 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6bdc979b86-rndp8" Jan 30 16:14:51 crc kubenswrapper[4740]: E0130 16:14:51.338167 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" podUID="0040ed18-716a-4452-8209-c45c497d7fae" Jan 30 16:14:51 crc kubenswrapper[4740]: I0130 16:14:51.658900 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5" event={"ID":"6ba6b433-534d-4a14-9fbb-4418b1c39fd9","Type":"ContainerStarted","Data":"4b29ac3cdde6e332609b184b6b909c2f270822e547eee5a1cead40a0db224a38"} Jan 30 16:14:51 crc kubenswrapper[4740]: I0130 16:14:51.659209 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5" Jan 30 16:14:51 crc kubenswrapper[4740]: I0130 16:14:51.684783 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5" podStartSLOduration=4.386498495 podStartE2EDuration="49.684754847s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.21227704 +0000 UTC m=+1093.849339649" lastFinishedPulling="2026-01-30 16:14:50.510533402 +0000 UTC m=+1139.147596001" observedRunningTime="2026-01-30 16:14:51.680235974 +0000 UTC m=+1140.317298573" watchObservedRunningTime="2026-01-30 16:14:51.684754847 +0000 UTC m=+1140.321817446" Jan 30 16:14:54 crc kubenswrapper[4740]: I0130 16:14:54.455276 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:14:54 crc kubenswrapper[4740]: I0130 16:14:54.456130 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.192516 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv"] Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.195568 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.201345 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.205076 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.242101 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv"] Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.357301 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d795672-b675-4220-a9a1-35a910a77f7b-config-volume\") pod \"collect-profiles-29496495-f6bsv\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.357377 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lpsd\" (UniqueName: \"kubernetes.io/projected/6d795672-b675-4220-a9a1-35a910a77f7b-kube-api-access-2lpsd\") pod \"collect-profiles-29496495-f6bsv\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.357426 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d795672-b675-4220-a9a1-35a910a77f7b-secret-volume\") pod \"collect-profiles-29496495-f6bsv\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.458654 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d795672-b675-4220-a9a1-35a910a77f7b-config-volume\") pod \"collect-profiles-29496495-f6bsv\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.458727 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lpsd\" (UniqueName: \"kubernetes.io/projected/6d795672-b675-4220-a9a1-35a910a77f7b-kube-api-access-2lpsd\") pod \"collect-profiles-29496495-f6bsv\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.458809 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d795672-b675-4220-a9a1-35a910a77f7b-secret-volume\") pod \"collect-profiles-29496495-f6bsv\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.460783 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d795672-b675-4220-a9a1-35a910a77f7b-config-volume\") pod 
\"collect-profiles-29496495-f6bsv\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.473638 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d795672-b675-4220-a9a1-35a910a77f7b-secret-volume\") pod \"collect-profiles-29496495-f6bsv\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.482803 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lpsd\" (UniqueName: \"kubernetes.io/projected/6d795672-b675-4220-a9a1-35a910a77f7b-kube-api-access-2lpsd\") pod \"collect-profiles-29496495-f6bsv\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:00 crc kubenswrapper[4740]: I0130 16:15:00.540317 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:01 crc kubenswrapper[4740]: I0130 16:15:01.004808 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv"] Jan 30 16:15:01 crc kubenswrapper[4740]: I0130 16:15:01.753771 4740 generic.go:334] "Generic (PLEG): container finished" podID="6d795672-b675-4220-a9a1-35a910a77f7b" containerID="9d588e41b1d7245c4f9367491522cd3e5663b3b60d0b11f5f2a9a1429684ff27" exitCode=0 Jan 30 16:15:01 crc kubenswrapper[4740]: I0130 16:15:01.753949 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" event={"ID":"6d795672-b675-4220-a9a1-35a910a77f7b","Type":"ContainerDied","Data":"9d588e41b1d7245c4f9367491522cd3e5663b3b60d0b11f5f2a9a1429684ff27"} Jan 30 16:15:01 crc kubenswrapper[4740]: I0130 16:15:01.754202 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" event={"ID":"6d795672-b675-4220-a9a1-35a910a77f7b","Type":"ContainerStarted","Data":"a3b397df3ef083fed210381653bcd0cd17279e6de1ce846133ac2f46f10ac492"} Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.070808 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.216830 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lpsd\" (UniqueName: \"kubernetes.io/projected/6d795672-b675-4220-a9a1-35a910a77f7b-kube-api-access-2lpsd\") pod \"6d795672-b675-4220-a9a1-35a910a77f7b\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.216895 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d795672-b675-4220-a9a1-35a910a77f7b-secret-volume\") pod \"6d795672-b675-4220-a9a1-35a910a77f7b\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.216949 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d795672-b675-4220-a9a1-35a910a77f7b-config-volume\") pod \"6d795672-b675-4220-a9a1-35a910a77f7b\" (UID: \"6d795672-b675-4220-a9a1-35a910a77f7b\") " Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.217958 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d795672-b675-4220-a9a1-35a910a77f7b-config-volume" (OuterVolumeSpecName: "config-volume") pod "6d795672-b675-4220-a9a1-35a910a77f7b" (UID: "6d795672-b675-4220-a9a1-35a910a77f7b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.224637 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d795672-b675-4220-a9a1-35a910a77f7b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6d795672-b675-4220-a9a1-35a910a77f7b" (UID: "6d795672-b675-4220-a9a1-35a910a77f7b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.224914 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d795672-b675-4220-a9a1-35a910a77f7b-kube-api-access-2lpsd" (OuterVolumeSpecName: "kube-api-access-2lpsd") pod "6d795672-b675-4220-a9a1-35a910a77f7b" (UID: "6d795672-b675-4220-a9a1-35a910a77f7b"). InnerVolumeSpecName "kube-api-access-2lpsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.318321 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d795672-b675-4220-a9a1-35a910a77f7b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.318378 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lpsd\" (UniqueName: \"kubernetes.io/projected/6d795672-b675-4220-a9a1-35a910a77f7b-kube-api-access-2lpsd\") on node \"crc\" DevicePath \"\"" Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.318396 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d795672-b675-4220-a9a1-35a910a77f7b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.679395 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-bzjc5" Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.778815 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" event={"ID":"6d795672-b675-4220-a9a1-35a910a77f7b","Type":"ContainerDied","Data":"a3b397df3ef083fed210381653bcd0cd17279e6de1ce846133ac2f46f10ac492"} Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.778874 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3b397df3ef083fed210381653bcd0cd17279e6de1ce846133ac2f46f10ac492" Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.778838 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv" Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.780467 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n" event={"ID":"c35e116f-97e5-47ec-aa40-955321cb09d5","Type":"ContainerStarted","Data":"825fc96892932a9762e1bb7871ce525e13f74e1bc4e1dc2e36ecaad84e72e9a4"} Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.780659 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n" Jan 30 16:15:03 crc kubenswrapper[4740]: I0130 16:15:03.803191 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n" podStartSLOduration=4.592308028 podStartE2EDuration="1m1.803165193s" podCreationTimestamp="2026-01-30 16:14:02 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.621464046 +0000 UTC m=+1094.258526645" lastFinishedPulling="2026-01-30 16:15:02.832321221 +0000 UTC m=+1151.469383810" observedRunningTime="2026-01-30 16:15:03.799463051 +0000 UTC m=+1152.436525650" watchObservedRunningTime="2026-01-30 16:15:03.803165193 +0000 UTC m=+1152.440227792" Jan 30 16:15:07 crc kubenswrapper[4740]: I0130 16:15:07.818989 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" event={"ID":"0040ed18-716a-4452-8209-c45c497d7fae","Type":"ContainerStarted","Data":"a62b22576c49e9b9311a1aaaa096eaf3dbcb2dd6de479c714dec890bf47cb132"} Jan 30 16:15:07 crc kubenswrapper[4740]: I0130 16:15:07.838887 4740 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bmhgb" podStartSLOduration=3.045197566 podStartE2EDuration="1m4.838862721s" podCreationTimestamp="2026-01-30 16:14:03 +0000 UTC" firstStartedPulling="2026-01-30 16:14:05.650794436 +0000 UTC m=+1094.287857035" lastFinishedPulling="2026-01-30 16:15:07.444459591 +0000 UTC m=+1156.081522190" observedRunningTime="2026-01-30 16:15:07.837136578 +0000 UTC m=+1156.474199187" watchObservedRunningTime="2026-01-30 16:15:07.838862721 +0000 UTC m=+1156.475925330" Jan 30 16:15:13 crc kubenswrapper[4740]: I0130 16:15:13.802661 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-lqf5n" Jan 30 16:15:24 crc kubenswrapper[4740]: I0130 16:15:24.454704 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:15:24 crc kubenswrapper[4740]: I0130 16:15:24.455642 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.515344 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hs28x"] Jan 30 16:15:31 crc kubenswrapper[4740]: E0130 16:15:31.518252 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d795672-b675-4220-a9a1-35a910a77f7b" containerName="collect-profiles" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.518270 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d795672-b675-4220-a9a1-35a910a77f7b" containerName="collect-profiles" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.518438 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d795672-b675-4220-a9a1-35a910a77f7b" containerName="collect-profiles" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.523900 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.526859 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.527559 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.527612 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-2spb9" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.527623 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.540181 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hs28x"] Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.670595 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7127e217-781c-4e11-b832-708388dfe45c-config\") pod \"dnsmasq-dns-675f4bcbfc-hs28x\" (UID: \"7127e217-781c-4e11-b832-708388dfe45c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.670788 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvljq\" (UniqueName: \"kubernetes.io/projected/7127e217-781c-4e11-b832-708388dfe45c-kube-api-access-lvljq\") pod \"dnsmasq-dns-675f4bcbfc-hs28x\" (UID: \"7127e217-781c-4e11-b832-708388dfe45c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.675844 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-88hc8"] Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.677327 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.683437 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.698428 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-88hc8"] Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.772097 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs5n5\" (UniqueName: \"kubernetes.io/projected/a66a2899-88ba-4309-8671-694d1f29704f-kube-api-access-qs5n5\") pod \"dnsmasq-dns-78dd6ddcc-88hc8\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.772275 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-config\") pod \"dnsmasq-dns-78dd6ddcc-88hc8\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.772559 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvljq\" (UniqueName: \"kubernetes.io/projected/7127e217-781c-4e11-b832-708388dfe45c-kube-api-access-lvljq\") pod \"dnsmasq-dns-675f4bcbfc-hs28x\" (UID: \"7127e217-781c-4e11-b832-708388dfe45c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.772662 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-88hc8\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.772717 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7127e217-781c-4e11-b832-708388dfe45c-config\") pod \"dnsmasq-dns-675f4bcbfc-hs28x\" (UID: \"7127e217-781c-4e11-b832-708388dfe45c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.774120 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7127e217-781c-4e11-b832-708388dfe45c-config\") pod \"dnsmasq-dns-675f4bcbfc-hs28x\" (UID: \"7127e217-781c-4e11-b832-708388dfe45c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.795768 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvljq\" (UniqueName: \"kubernetes.io/projected/7127e217-781c-4e11-b832-708388dfe45c-kube-api-access-lvljq\") pod \"dnsmasq-dns-675f4bcbfc-hs28x\" (UID: \"7127e217-781c-4e11-b832-708388dfe45c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.846104 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.874119 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs5n5\" (UniqueName: \"kubernetes.io/projected/a66a2899-88ba-4309-8671-694d1f29704f-kube-api-access-qs5n5\") pod \"dnsmasq-dns-78dd6ddcc-88hc8\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.874720 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-config\") pod \"dnsmasq-dns-78dd6ddcc-88hc8\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.874827 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-88hc8\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.875965 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-config\") pod \"dnsmasq-dns-78dd6ddcc-88hc8\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.877968 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-88hc8\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:15:31 crc kubenswrapper[4740]: I0130 16:15:31.901215 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs5n5\" (UniqueName: \"kubernetes.io/projected/a66a2899-88ba-4309-8671-694d1f29704f-kube-api-access-qs5n5\") pod \"dnsmasq-dns-78dd6ddcc-88hc8\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:15:32 crc kubenswrapper[4740]: I0130 16:15:32.006329 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:15:32 crc kubenswrapper[4740]: I0130 16:15:32.372208 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hs28x"] Jan 30 16:15:32 crc kubenswrapper[4740]: I0130 16:15:32.626864 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-88hc8"] Jan 30 16:15:33 crc kubenswrapper[4740]: I0130 16:15:33.123926 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" event={"ID":"a66a2899-88ba-4309-8671-694d1f29704f","Type":"ContainerStarted","Data":"2a257a99fa8e0fdb94c7533ba88e2498609c52ff80fd24fa60675c3875f48492"} Jan 30 16:15:33 crc kubenswrapper[4740]: I0130 16:15:33.126117 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x" event={"ID":"7127e217-781c-4e11-b832-708388dfe45c","Type":"ContainerStarted","Data":"f5734ab1c1f574df7fa8259a05a2d2c9e5c02d81ad449602f536630f97d1f4c8"} Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.255895 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hs28x"] Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.275580 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zs5g8"] Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.277457 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.288639 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zs5g8"] Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.450201 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zs5g8\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.451391 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmfh\" (UniqueName: \"kubernetes.io/projected/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-kube-api-access-4fmfh\") pod \"dnsmasq-dns-666b6646f7-zs5g8\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.451625 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-config\") pod \"dnsmasq-dns-666b6646f7-zs5g8\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.555596 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-config\") pod \"dnsmasq-dns-666b6646f7-zs5g8\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.555710 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-dns-svc\") pod 
\"dnsmasq-dns-666b6646f7-zs5g8\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.555765 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmfh\" (UniqueName: \"kubernetes.io/projected/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-kube-api-access-4fmfh\") pod \"dnsmasq-dns-666b6646f7-zs5g8\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.556576 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-config\") pod \"dnsmasq-dns-666b6646f7-zs5g8\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.556904 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zs5g8\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.604047 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fmfh\" (UniqueName: \"kubernetes.io/projected/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-kube-api-access-4fmfh\") pod \"dnsmasq-dns-666b6646f7-zs5g8\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.619805 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.641688 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-88hc8"] Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.675334 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hpm8b"] Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.677725 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.699310 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hpm8b"] Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.764199 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hpm8b\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.764309 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-config\") pod \"dnsmasq-dns-57d769cc4f-hpm8b\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.764383 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtjlp\" (UniqueName: \"kubernetes.io/projected/4f91b4d2-b91b-427d-93a5-473f7d477294-kube-api-access-dtjlp\") pod \"dnsmasq-dns-57d769cc4f-hpm8b\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.867751 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-config\") pod \"dnsmasq-dns-57d769cc4f-hpm8b\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.867829 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjlp\" (UniqueName: \"kubernetes.io/projected/4f91b4d2-b91b-427d-93a5-473f7d477294-kube-api-access-dtjlp\") pod \"dnsmasq-dns-57d769cc4f-hpm8b\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.867876 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hpm8b\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.868988 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hpm8b\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.869344 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-config\") pod \"dnsmasq-dns-57d769cc4f-hpm8b\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:15:34 crc kubenswrapper[4740]: I0130 16:15:34.893917 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjlp\" (UniqueName: 
\"kubernetes.io/projected/4f91b4d2-b91b-427d-93a5-473f7d477294-kube-api-access-dtjlp\") pod \"dnsmasq-dns-57d769cc4f-hpm8b\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.101750 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.289799 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zs5g8"] Jan 30 16:15:35 crc kubenswrapper[4740]: W0130 16:15:35.333277 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4aa5a3fa_96d2_4ba2_a265_8a1802ba9cdd.slice/crio-7d401b04099ad7bce890804e472c3c68712cfe73d2335d30c18cc6aa32f46428 WatchSource:0}: Error finding container 7d401b04099ad7bce890804e472c3c68712cfe73d2335d30c18cc6aa32f46428: Status 404 returned error can't find the container with id 7d401b04099ad7bce890804e472c3c68712cfe73d2335d30c18cc6aa32f46428 Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.456714 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.458504 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.464441 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.464595 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.464710 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.464851 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.472839 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.473053 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wn5g9" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.473164 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.475558 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.604522 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.604968 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc 
kubenswrapper[4740]: I0130 16:15:35.604997 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.605022 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.605047 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.605067 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p6zz\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-kube-api-access-7p6zz\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.605106 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-config-data\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.605137 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.605163 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.605187 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.605213 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc 
kubenswrapper[4740]: I0130 16:15:35.708618 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.708769 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.708893 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.708963 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.709030 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.709072 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.709095 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.709146 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p6zz\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-kube-api-access-7p6zz\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.709243 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-config-data\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.709302 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.709343 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.714391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.714755 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.715568 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-config-data\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.721633 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.722886 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.724018 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.725224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.736460 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.736508 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e3333af33d12b2330cb429592689dfb6f04fa8dbabb80e6e509a7e63f9ce6eca/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.738653 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.750185 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p6zz\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-kube-api-access-7p6zz\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.761710 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.794850 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\") pod \"rabbitmq-server-0\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " pod="openstack/rabbitmq-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.832808 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hpm8b"] Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.844296 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.845859 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.864215 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.864551 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.864670 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.864934 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.865621 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.865802 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.866605 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mfttj" Jan 30 16:15:35 crc kubenswrapper[4740]: I0130 16:15:35.870161 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.015391 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.015453 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.015498 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxb2h\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-kube-api-access-qxb2h\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.015529 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.015545 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.015562 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.015592 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.015619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.015659 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.015675 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/860fd88f-2b83-4fc3-8411-7d10dc1281b2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.015705 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/860fd88f-2b83-4fc3-8411-7d10dc1281b2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.101103 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.117418 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.117525 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/860fd88f-2b83-4fc3-8411-7d10dc1281b2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.117605 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/860fd88f-2b83-4fc3-8411-7d10dc1281b2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.117650 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.117693 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.117739 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxb2h\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-kube-api-access-qxb2h\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.117901 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.118065 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.118417 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 
16:15:36.118476 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.118565 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.118611 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.118732 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.119008 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.119117 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.120339 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.122381 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
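The "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..." line from csi_attacher.go records a CSI capability check: the kubelet only calls a driver's NodeStageVolume RPC when the driver advertises STAGE_UNSTAGE_VOLUME via NodeGetCapabilities, and kubevirt.io.hostpath-provisioner evidently does not, so staging is skipped and MountDevice succeeds immediately. A short hedged sketch of that branch, with a simplified csiDriver type standing in for the driver's advertised capability set:

package main

import "fmt"

// csiDriver is an illustrative stand-in; the real kubelet queries the
// driver's NodeGetCapabilities RPC rather than a map like this.
type csiDriver struct {
	name             string
	nodeCapabilities map[string]bool
}

// mountDevice mirrors the branch logged by csi_attacher.go: without the
// STAGE_UNSTAGE_VOLUME capability, NodeStageVolume is never called.
func mountDevice(d csiDriver, globalMountPath string) {
	if !d.nodeCapabilities["STAGE_UNSTAGE_VOLUME"] {
		fmt.Printf("%s: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...\n", d.name)
		return
	}
	fmt.Printf("%s: NodeStageVolume -> %s\n", d.name, globalMountPath)
}

func main() {
	hostpath := csiDriver{name: "kubevirt.io.hostpath-provisioner", nodeCapabilities: map[string]bool{}}
	mountDevice(hostpath, "/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/<hash>/globalmount")
}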
Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.122411 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da441bad2f94d37b9f999b87d703bd642232cb69fb10e554c18e948912ce445b/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.138637 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/860fd88f-2b83-4fc3-8411-7d10dc1281b2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.142425 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/860fd88f-2b83-4fc3-8411-7d10dc1281b2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.143479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxb2h\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-kube-api-access-qxb2h\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.143835 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.148891 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.182652 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\") pod \"rabbitmq-cell1-server-0\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.204810 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.227939 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" event={"ID":"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd","Type":"ContainerStarted","Data":"7d401b04099ad7bce890804e472c3c68712cfe73d2335d30c18cc6aa32f46428"} Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.229614 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" event={"ID":"4f91b4d2-b91b-427d-93a5-473f7d477294","Type":"ContainerStarted","Data":"09bcdd6e16cfd697b687e49919dd77dc74e2c35a7ad8d1fdd4ef0b84e32f5fe0"} Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.667766 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:15:36 crc kubenswrapper[4740]: W0130 16:15:36.692615 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aae2bad_ea00_4d1f_a30f_a8891e15ad05.slice/crio-9ff1c1329888872d037b7035605cc6c8bfa00ec684c8aebfc964a618fd17eb69 WatchSource:0}: Error finding container 9ff1c1329888872d037b7035605cc6c8bfa00ec684c8aebfc964a618fd17eb69: Status 404 returned error can't find the container with id 9ff1c1329888872d037b7035605cc6c8bfa00ec684c8aebfc964a618fd17eb69 Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.859776 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.889438 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.899605 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.908946 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.910380 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-cr2q6" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.911181 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.915666 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.916143 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 16:15:36 crc kubenswrapper[4740]: I0130 16:15:36.931257 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.043131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09f1ea51-a4df-41eb-a996-f19303114474-config-data-default\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.043186 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09f1ea51-a4df-41eb-a996-f19303114474-config-data-generated\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.043241 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f1ea51-a4df-41eb-a996-f19303114474-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.043262 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f1ea51-a4df-41eb-a996-f19303114474-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.043282 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f1ea51-a4df-41eb-a996-f19303114474-operator-scripts\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.043323 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09f1ea51-a4df-41eb-a996-f19303114474-kolla-config\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.043343 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76bhm\" (UniqueName: \"kubernetes.io/projected/09f1ea51-a4df-41eb-a996-f19303114474-kube-api-access-76bhm\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.043385 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9bd8709b-4264-46b6-9e77-80eb6a3fbc46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bd8709b-4264-46b6-9e77-80eb6a3fbc46\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.158965 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09f1ea51-a4df-41eb-a996-f19303114474-kolla-config\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.159023 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76bhm\" (UniqueName: \"kubernetes.io/projected/09f1ea51-a4df-41eb-a996-f19303114474-kube-api-access-76bhm\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.159060 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9bd8709b-4264-46b6-9e77-80eb6a3fbc46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bd8709b-4264-46b6-9e77-80eb6a3fbc46\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.159139 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09f1ea51-a4df-41eb-a996-f19303114474-config-data-default\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.159167 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09f1ea51-a4df-41eb-a996-f19303114474-config-data-generated\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.159235 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f1ea51-a4df-41eb-a996-f19303114474-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.159263 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f1ea51-a4df-41eb-a996-f19303114474-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.159292 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f1ea51-a4df-41eb-a996-f19303114474-operator-scripts\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.160549 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09f1ea51-a4df-41eb-a996-f19303114474-config-data-generated\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.160706 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09f1ea51-a4df-41eb-a996-f19303114474-kolla-config\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.161968 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09f1ea51-a4df-41eb-a996-f19303114474-config-data-default\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.162735 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f1ea51-a4df-41eb-a996-f19303114474-operator-scripts\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.167457 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f1ea51-a4df-41eb-a996-f19303114474-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.167937 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.167974 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9bd8709b-4264-46b6-9e77-80eb6a3fbc46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bd8709b-4264-46b6-9e77-80eb6a3fbc46\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c7b9c249cec8cebe8072e29caea641631841e1e0ed54645bc134f1d7c980c6f9/globalmount\"" pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.182504 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76bhm\" (UniqueName: \"kubernetes.io/projected/09f1ea51-a4df-41eb-a996-f19303114474-kube-api-access-76bhm\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.183247 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f1ea51-a4df-41eb-a996-f19303114474-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.236529 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9bd8709b-4264-46b6-9e77-80eb6a3fbc46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9bd8709b-4264-46b6-9e77-80eb6a3fbc46\") pod \"openstack-galera-0\" (UID: \"09f1ea51-a4df-41eb-a996-f19303114474\") " pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.246299 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"860fd88f-2b83-4fc3-8411-7d10dc1281b2","Type":"ContainerStarted","Data":"59c0290f1bf07d518868e326b92d32736b009f1ca4821cdbd051736aef7f3d51"} Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.248485 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3aae2bad-ea00-4d1f-a30f-a8891e15ad05","Type":"ContainerStarted","Data":"9ff1c1329888872d037b7035605cc6c8bfa00ec684c8aebfc964a618fd17eb69"} Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.271074 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 16:15:37 crc kubenswrapper[4740]: I0130 16:15:37.976893 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.134619 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.136416 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.148699 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.148813 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.149251 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.151206 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-jrj7q" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.171193 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.299040 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-54941080-7622-4f79-870c-e122b6f526f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54941080-7622-4f79-870c-e122b6f526f1\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.299105 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/483203e9-89d7-4b67-b0b9-d0bda08469da-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.299191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm2zt\" (UniqueName: \"kubernetes.io/projected/483203e9-89d7-4b67-b0b9-d0bda08469da-kube-api-access-wm2zt\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.299238 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/483203e9-89d7-4b67-b0b9-d0bda08469da-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.299573 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/483203e9-89d7-4b67-b0b9-d0bda08469da-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.299730 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/483203e9-89d7-4b67-b0b9-d0bda08469da-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.299868 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483203e9-89d7-4b67-b0b9-d0bda08469da-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.299900 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/483203e9-89d7-4b67-b0b9-d0bda08469da-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.401679 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm2zt\" (UniqueName: \"kubernetes.io/projected/483203e9-89d7-4b67-b0b9-d0bda08469da-kube-api-access-wm2zt\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.401800 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/483203e9-89d7-4b67-b0b9-d0bda08469da-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.401842 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/483203e9-89d7-4b67-b0b9-d0bda08469da-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.401878 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/483203e9-89d7-4b67-b0b9-d0bda08469da-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.401912 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483203e9-89d7-4b67-b0b9-d0bda08469da-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.401940 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/483203e9-89d7-4b67-b0b9-d0bda08469da-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.402029 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-54941080-7622-4f79-870c-e122b6f526f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54941080-7622-4f79-870c-e122b6f526f1\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.402067 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/483203e9-89d7-4b67-b0b9-d0bda08469da-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.402662 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/483203e9-89d7-4b67-b0b9-d0bda08469da-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.406523 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/483203e9-89d7-4b67-b0b9-d0bda08469da-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.407125 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/483203e9-89d7-4b67-b0b9-d0bda08469da-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.407232 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/483203e9-89d7-4b67-b0b9-d0bda08469da-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.409875 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.409918 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-54941080-7622-4f79-870c-e122b6f526f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54941080-7622-4f79-870c-e122b6f526f1\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3d9f9437b916cea1a9c4298a9071b66ddb3655451045344076c5010d5e954fa0/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.414206 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/483203e9-89d7-4b67-b0b9-d0bda08469da-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.414898 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483203e9-89d7-4b67-b0b9-d0bda08469da-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.450771 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm2zt\" (UniqueName: \"kubernetes.io/projected/483203e9-89d7-4b67-b0b9-d0bda08469da-kube-api-access-wm2zt\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.489698 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.500967 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.501515 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.512018 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qbjjz" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.512283 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.516108 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.580672 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-54941080-7622-4f79-870c-e122b6f526f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-54941080-7622-4f79-870c-e122b6f526f1\") pod \"openstack-cell1-galera-0\" (UID: \"483203e9-89d7-4b67-b0b9-d0bda08469da\") " pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.608879 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lmx6\" (UniqueName: \"kubernetes.io/projected/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-kube-api-access-7lmx6\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.608994 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-config-data\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.609022 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.609153 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-kolla-config\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.609246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.720578 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-kolla-config\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.720770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.720877 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lmx6\" (UniqueName: \"kubernetes.io/projected/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-kube-api-access-7lmx6\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.720954 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-config-data\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.720991 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.721590 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-kolla-config\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.722571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-config-data\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.735113 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.750080 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.767073 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.776540 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lmx6\" (UniqueName: \"kubernetes.io/projected/20cc1f1a-e021-42dd-b435-64eaf9cfa1d7-kube-api-access-7lmx6\") pod \"memcached-0\" (UID: \"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7\") " pod="openstack/memcached-0" Jan 30 16:15:38 crc kubenswrapper[4740]: I0130 16:15:38.867808 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 16:15:40 crc kubenswrapper[4740]: I0130 16:15:40.340131 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:15:40 crc kubenswrapper[4740]: I0130 16:15:40.346970 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:15:40 crc kubenswrapper[4740]: I0130 16:15:40.350740 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-4f69v" Jan 30 16:15:40 crc kubenswrapper[4740]: I0130 16:15:40.364762 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:15:40 crc kubenswrapper[4740]: I0130 16:15:40.474451 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8cc6\" (UniqueName: \"kubernetes.io/projected/a8ee026b-f6be-4d78-adf8-eaa7c77e1e00-kube-api-access-n8cc6\") pod \"kube-state-metrics-0\" (UID: \"a8ee026b-f6be-4d78-adf8-eaa7c77e1e00\") " pod="openstack/kube-state-metrics-0" Jan 30 16:15:40 crc kubenswrapper[4740]: I0130 16:15:40.578126 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8cc6\" (UniqueName: \"kubernetes.io/projected/a8ee026b-f6be-4d78-adf8-eaa7c77e1e00-kube-api-access-n8cc6\") pod \"kube-state-metrics-0\" (UID: \"a8ee026b-f6be-4d78-adf8-eaa7c77e1e00\") " pod="openstack/kube-state-metrics-0" Jan 30 16:15:40 crc kubenswrapper[4740]: I0130 16:15:40.643073 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8cc6\" (UniqueName: \"kubernetes.io/projected/a8ee026b-f6be-4d78-adf8-eaa7c77e1e00-kube-api-access-n8cc6\") pod \"kube-state-metrics-0\" (UID: \"a8ee026b-f6be-4d78-adf8-eaa7c77e1e00\") " pod="openstack/kube-state-metrics-0" Jan 30 16:15:40 crc kubenswrapper[4740]: I0130 16:15:40.751780 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.049965 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.056423 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.064589 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.065101 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.065246 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.065713 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-28lpj" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.065843 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.110727 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.198211 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8h9b\" (UniqueName: \"kubernetes.io/projected/47e7ebfc-24f9-4946-aace-c402546d5a60-kube-api-access-x8h9b\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.198279 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/47e7ebfc-24f9-4946-aace-c402546d5a60-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.198337 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/47e7ebfc-24f9-4946-aace-c402546d5a60-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.198375 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/47e7ebfc-24f9-4946-aace-c402546d5a60-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.198416 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/47e7ebfc-24f9-4946-aace-c402546d5a60-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.198457 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/47e7ebfc-24f9-4946-aace-c402546d5a60-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.198476 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/47e7ebfc-24f9-4946-aace-c402546d5a60-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.300304 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/47e7ebfc-24f9-4946-aace-c402546d5a60-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.300447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/47e7ebfc-24f9-4946-aace-c402546d5a60-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.300487 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/47e7ebfc-24f9-4946-aace-c402546d5a60-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.300532 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8h9b\" (UniqueName: \"kubernetes.io/projected/47e7ebfc-24f9-4946-aace-c402546d5a60-kube-api-access-x8h9b\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.300573 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/47e7ebfc-24f9-4946-aace-c402546d5a60-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.300651 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/47e7ebfc-24f9-4946-aace-c402546d5a60-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.300685 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/47e7ebfc-24f9-4946-aace-c402546d5a60-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.301398 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/47e7ebfc-24f9-4946-aace-c402546d5a60-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.305129 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/47e7ebfc-24f9-4946-aace-c402546d5a60-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.305686 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/47e7ebfc-24f9-4946-aace-c402546d5a60-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.306449 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/47e7ebfc-24f9-4946-aace-c402546d5a60-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.306447 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/47e7ebfc-24f9-4946-aace-c402546d5a60-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.308942 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/47e7ebfc-24f9-4946-aace-c402546d5a60-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.322425 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8h9b\" (UniqueName: \"kubernetes.io/projected/47e7ebfc-24f9-4946-aace-c402546d5a60-kube-api-access-x8h9b\") pod \"alertmanager-metric-storage-0\" (UID: \"47e7ebfc-24f9-4946-aace-c402546d5a60\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.420885 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.544091 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.547752 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.561873 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.561944 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.562101 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.562148 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.562363 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-64lqk" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.568400 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.568627 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.568695 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.605910 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.713202 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.713263 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.713314 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.713362 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.713389 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.713413 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-69c5e224-a0d2-402e-a748-1966b938b437\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.713440 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.713462 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tj92\" (UniqueName: \"kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-kube-api-access-6tj92\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.713494 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.713519 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.814761 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.814827 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.814867 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-69c5e224-a0d2-402e-a748-1966b938b437\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.814903 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.814928 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tj92\" (UniqueName: \"kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-kube-api-access-6tj92\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.814990 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.815033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.815102 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.815131 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.815164 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.815900 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.819376 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.821226 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.821748 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.825080 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.825997 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.827962 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.828005 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-69c5e224-a0d2-402e-a748-1966b938b437\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da6c24608c93f0a2e0624bdeac2ceeb9f7fcc16b1afc060dd8f4d7936492775d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.828626 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.828940 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.850375 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tj92\" (UniqueName: \"kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-kube-api-access-6tj92\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.869393 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-69c5e224-a0d2-402e-a748-1966b938b437\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437\") pod \"prometheus-metric-storage-0\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:41 crc kubenswrapper[4740]: I0130 16:15:41.893254 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.513163 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8vhhm"] Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.514419 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.516589 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.517617 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-gv4lm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.522970 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.531405 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8vhhm"] Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.567795 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7wnqc"] Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.569714 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.586737 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25c16e6c-3931-4064-bf64-baf0759712a5-var-run-ovn\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.586841 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c16e6c-3931-4064-bf64-baf0759712a5-ovn-controller-tls-certs\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.586952 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c16e6c-3931-4064-bf64-baf0759712a5-combined-ca-bundle\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.587279 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25c16e6c-3931-4064-bf64-baf0759712a5-var-run\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.587340 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsnq\" (UniqueName: \"kubernetes.io/projected/25c16e6c-3931-4064-bf64-baf0759712a5-kube-api-access-tnsnq\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.587415 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25c16e6c-3931-4064-bf64-baf0759712a5-var-log-ovn\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.602687 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25c16e6c-3931-4064-bf64-baf0759712a5-scripts\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.612258 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7wnqc"] Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707407 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-var-lib\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707482 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25c16e6c-3931-4064-bf64-baf0759712a5-scripts\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707509 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25c16e6c-3931-4064-bf64-baf0759712a5-var-run-ovn\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707578 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c16e6c-3931-4064-bf64-baf0759712a5-ovn-controller-tls-certs\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707598 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-var-run\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707638 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c16e6c-3931-4064-bf64-baf0759712a5-combined-ca-bundle\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707697 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81f43ac7-ed84-4eff-af70-47991eaab066-scripts\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707722 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-etc-ovs\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707755 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25c16e6c-3931-4064-bf64-baf0759712a5-var-run\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707790 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-var-log\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707817 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnsnq\" (UniqueName: \"kubernetes.io/projected/25c16e6c-3931-4064-bf64-baf0759712a5-kube-api-access-tnsnq\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707867 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25c16e6c-3931-4064-bf64-baf0759712a5-var-log-ovn\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.707908 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zg4q\" (UniqueName: \"kubernetes.io/projected/81f43ac7-ed84-4eff-af70-47991eaab066-kube-api-access-2zg4q\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.708829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/25c16e6c-3931-4064-bf64-baf0759712a5-var-run-ovn\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.709209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/25c16e6c-3931-4064-bf64-baf0759712a5-var-run\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.709441 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/25c16e6c-3931-4064-bf64-baf0759712a5-var-log-ovn\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.710206 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25c16e6c-3931-4064-bf64-baf0759712a5-scripts\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.718494 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c16e6c-3931-4064-bf64-baf0759712a5-ovn-controller-tls-certs\") pod 
\"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.723392 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c16e6c-3931-4064-bf64-baf0759712a5-combined-ca-bundle\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.731836 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnsnq\" (UniqueName: \"kubernetes.io/projected/25c16e6c-3931-4064-bf64-baf0759712a5-kube-api-access-tnsnq\") pod \"ovn-controller-8vhhm\" (UID: \"25c16e6c-3931-4064-bf64-baf0759712a5\") " pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.810272 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-var-log\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.810379 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zg4q\" (UniqueName: \"kubernetes.io/projected/81f43ac7-ed84-4eff-af70-47991eaab066-kube-api-access-2zg4q\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.810457 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-var-lib\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.810502 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-var-run\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.810543 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81f43ac7-ed84-4eff-af70-47991eaab066-scripts\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.810563 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-etc-ovs\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.810866 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-var-log\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.810886 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-var-run\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.811041 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-etc-ovs\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.811051 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/81f43ac7-ed84-4eff-af70-47991eaab066-var-lib\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.813141 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81f43ac7-ed84-4eff-af70-47991eaab066-scripts\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.832105 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.833386 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zg4q\" (UniqueName: \"kubernetes.io/projected/81f43ac7-ed84-4eff-af70-47991eaab066-kube-api-access-2zg4q\") pod \"ovn-controller-ovs-7wnqc\" (UID: \"81f43ac7-ed84-4eff-af70-47991eaab066\") " pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.836476 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.842034 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.842339 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.842598 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.842774 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nzrvc" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.843141 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.845685 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.847814 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8vhhm" Jan 30 16:15:44 crc kubenswrapper[4740]: I0130 16:15:44.913692 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.016391 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-74c9a77b-cc64-4446-9c1c-fa732fc31344\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74c9a77b-cc64-4446-9c1c-fa732fc31344\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.016464 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2182168-2683-42dd-abfc-1d19d9079ca6-config\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.016493 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2182168-2683-42dd-abfc-1d19d9079ca6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.016528 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2182168-2683-42dd-abfc-1d19d9079ca6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.016550 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2182168-2683-42dd-abfc-1d19d9079ca6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.016819 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2182168-2683-42dd-abfc-1d19d9079ca6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.016880 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2182168-2683-42dd-abfc-1d19d9079ca6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.016959 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxnw\" (UniqueName: \"kubernetes.io/projected/c2182168-2683-42dd-abfc-1d19d9079ca6-kube-api-access-llxnw\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.118902 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2182168-2683-42dd-abfc-1d19d9079ca6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " 
pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.118979 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llxnw\" (UniqueName: \"kubernetes.io/projected/c2182168-2683-42dd-abfc-1d19d9079ca6-kube-api-access-llxnw\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.119058 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-74c9a77b-cc64-4446-9c1c-fa732fc31344\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74c9a77b-cc64-4446-9c1c-fa732fc31344\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.119087 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2182168-2683-42dd-abfc-1d19d9079ca6-config\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.119109 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2182168-2683-42dd-abfc-1d19d9079ca6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.119138 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2182168-2683-42dd-abfc-1d19d9079ca6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.119162 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2182168-2683-42dd-abfc-1d19d9079ca6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.119184 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2182168-2683-42dd-abfc-1d19d9079ca6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.120260 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2182168-2683-42dd-abfc-1d19d9079ca6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.120737 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2182168-2683-42dd-abfc-1d19d9079ca6-config\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.120927 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c2182168-2683-42dd-abfc-1d19d9079ca6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.125056 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2182168-2683-42dd-abfc-1d19d9079ca6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.125434 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2182168-2683-42dd-abfc-1d19d9079ca6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.125731 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.125777 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-74c9a77b-cc64-4446-9c1c-fa732fc31344\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74c9a77b-cc64-4446-9c1c-fa732fc31344\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b85aec3eced3631c6fe6ea9aaf4f92ebc7bb33b87cf76d22f00b926df484078e/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.130160 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2182168-2683-42dd-abfc-1d19d9079ca6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.138997 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llxnw\" (UniqueName: \"kubernetes.io/projected/c2182168-2683-42dd-abfc-1d19d9079ca6-kube-api-access-llxnw\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.175488 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-74c9a77b-cc64-4446-9c1c-fa732fc31344\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74c9a77b-cc64-4446-9c1c-fa732fc31344\") pod \"ovsdbserver-nb-0\" (UID: \"c2182168-2683-42dd-abfc-1d19d9079ca6\") " pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:45 crc kubenswrapper[4740]: I0130 16:15:45.197016 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.479857 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.482474 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.490257 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.516410 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.516636 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-4cgtk" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.516794 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.517034 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.584575 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/36bbce3a-c121-4811-9a61-ab05b62dce0b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.584676 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36bbce3a-c121-4811-9a61-ab05b62dce0b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.584769 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bbce3a-c121-4811-9a61-ab05b62dce0b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.584855 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/36bbce3a-c121-4811-9a61-ab05b62dce0b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.584914 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/36bbce3a-c121-4811-9a61-ab05b62dce0b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.585106 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5066a02c-58ec-4425-8822-1f0d7052f2b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5066a02c-58ec-4425-8822-1f0d7052f2b3\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.585408 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bbce3a-c121-4811-9a61-ab05b62dce0b-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.585518 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k64b\" (UniqueName: \"kubernetes.io/projected/36bbce3a-c121-4811-9a61-ab05b62dce0b-kube-api-access-7k64b\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.687334 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5066a02c-58ec-4425-8822-1f0d7052f2b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5066a02c-58ec-4425-8822-1f0d7052f2b3\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.687450 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bbce3a-c121-4811-9a61-ab05b62dce0b-config\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.687486 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k64b\" (UniqueName: \"kubernetes.io/projected/36bbce3a-c121-4811-9a61-ab05b62dce0b-kube-api-access-7k64b\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.687513 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/36bbce3a-c121-4811-9a61-ab05b62dce0b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.687555 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36bbce3a-c121-4811-9a61-ab05b62dce0b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.687577 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bbce3a-c121-4811-9a61-ab05b62dce0b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.687597 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/36bbce3a-c121-4811-9a61-ab05b62dce0b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.687612 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/36bbce3a-c121-4811-9a61-ab05b62dce0b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.688031 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/36bbce3a-c121-4811-9a61-ab05b62dce0b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.689067 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bbce3a-c121-4811-9a61-ab05b62dce0b-config\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.689109 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36bbce3a-c121-4811-9a61-ab05b62dce0b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.693574 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.693621 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5066a02c-58ec-4425-8822-1f0d7052f2b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5066a02c-58ec-4425-8822-1f0d7052f2b3\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1b8e101d0d5aa547913b0ac1ebccfa62765c73c38ef0120bba94e40b63a88db5/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.696194 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/36bbce3a-c121-4811-9a61-ab05b62dce0b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.699216 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/36bbce3a-c121-4811-9a61-ab05b62dce0b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.706309 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bbce3a-c121-4811-9a61-ab05b62dce0b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.709818 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k64b\" (UniqueName: \"kubernetes.io/projected/36bbce3a-c121-4811-9a61-ab05b62dce0b-kube-api-access-7k64b\") pod \"ovsdbserver-sb-0\" (UID: \"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.738102 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5066a02c-58ec-4425-8822-1f0d7052f2b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5066a02c-58ec-4425-8822-1f0d7052f2b3\") pod \"ovsdbserver-sb-0\" (UID: 
\"36bbce3a-c121-4811-9a61-ab05b62dce0b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:48 crc kubenswrapper[4740]: I0130 16:15:48.846664 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.200964 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7"] Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.202886 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.204863 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.205234 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.205379 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-nzkzr" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.205955 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.206335 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.215273 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7"] Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.269551 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/2614d072-47f4-4ed5-bfca-df4e1c46c665-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.269664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2614d072-47f4-4ed5-bfca-df4e1c46c665-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.269712 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2614d072-47f4-4ed5-bfca-df4e1c46c665-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.269831 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sspf6\" (UniqueName: \"kubernetes.io/projected/2614d072-47f4-4ed5-bfca-df4e1c46c665-kube-api-access-sspf6\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " 
pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.269864 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2614d072-47f4-4ed5-bfca-df4e1c46c665-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.372211 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/2614d072-47f4-4ed5-bfca-df4e1c46c665-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.372299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2614d072-47f4-4ed5-bfca-df4e1c46c665-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.372336 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2614d072-47f4-4ed5-bfca-df4e1c46c665-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.372392 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sspf6\" (UniqueName: \"kubernetes.io/projected/2614d072-47f4-4ed5-bfca-df4e1c46c665-kube-api-access-sspf6\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.372423 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2614d072-47f4-4ed5-bfca-df4e1c46c665-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.373534 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2614d072-47f4-4ed5-bfca-df4e1c46c665-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.379156 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2614d072-47f4-4ed5-bfca-df4e1c46c665-config\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: 
\"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.392562 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2614d072-47f4-4ed5-bfca-df4e1c46c665-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.392825 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/2614d072-47f4-4ed5-bfca-df4e1c46c665-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.399201 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2"] Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.400865 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.407244 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sspf6\" (UniqueName: \"kubernetes.io/projected/2614d072-47f4-4ed5-bfca-df4e1c46c665-kube-api-access-sspf6\") pod \"cloudkitty-lokistack-distributor-66dfd9bb-ln5c7\" (UID: \"2614d072-47f4-4ed5-bfca-df4e1c46c665\") " pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.408086 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.408337 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.408464 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.445958 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2"] Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.452810 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"09f1ea51-a4df-41eb-a996-f19303114474","Type":"ContainerStarted","Data":"2f63fce788fd92e3e9a9f86f1fdf81c7ff942589915b80b76392b65523609e16"} Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.475167 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.475301 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsq8z\" (UniqueName: 
\"kubernetes.io/projected/471174e9-72cd-40a9-8502-103a233c0dbe-kube-api-access-qsq8z\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.475368 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/471174e9-72cd-40a9-8502-103a233c0dbe-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.475447 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.475490 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.475553 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.525407 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.546274 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4"] Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.571361 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.581725 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.581804 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.581856 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.581930 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.581999 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsq8z\" (UniqueName: \"kubernetes.io/projected/471174e9-72cd-40a9-8502-103a233c0dbe-kube-api-access-qsq8z\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.582035 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/471174e9-72cd-40a9-8502-103a233c0dbe-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.583037 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.583591 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/471174e9-72cd-40a9-8502-103a233c0dbe-config\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.583978 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 
16:15:52.584482 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.588617 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.602286 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4"] Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.620463 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsq8z\" (UniqueName: \"kubernetes.io/projected/471174e9-72cd-40a9-8502-103a233c0dbe-kube-api-access-qsq8z\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.621082 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.630343 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/471174e9-72cd-40a9-8502-103a233c0dbe-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-795fd8f8cc-z6wx2\" (UID: \"471174e9-72cd-40a9-8502-103a233c0dbe\") " pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.675045 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2"] Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.676435 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.679014 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.679164 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.679265 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.680948 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.681751 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.681770 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.681816 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-qtgpv" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.683477 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.683520 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.683555 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.683587 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzn4f\" (UniqueName: \"kubernetes.io/projected/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-kube-api-access-jzn4f\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.683619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.706439 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2"] Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.723093 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt"] Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.724678 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.747305 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt"] Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.785625 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.785715 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786005 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786136 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc4kn\" (UniqueName: \"kubernetes.io/projected/d46b15b9-9ad3-4699-9358-44d48e09f824-kube-api-access-fc4kn\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786246 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786304 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786411 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786488 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786523 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786542 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786629 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786933 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.786963 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" 
(UniqueName: \"kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.787015 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.787871 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.787936 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.787970 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzn4f\" (UniqueName: \"kubernetes.io/projected/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-kube-api-access-jzn4f\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.787971 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.788057 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.788110 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-config\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 
16:15:52.788305 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.788523 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdbc9\" (UniqueName: \"kubernetes.io/projected/e2829a20-2177-481a-9a86-73f8bb323661-kube-api-access-hdbc9\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.788638 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.788676 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.792295 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.796030 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.805166 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.806567 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzn4f\" (UniqueName: \"kubernetes.io/projected/ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2-kube-api-access-jzn4f\") pod \"cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4\" (UID: \"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2\") " pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.891784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.891858 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc4kn\" (UniqueName: \"kubernetes.io/projected/d46b15b9-9ad3-4699-9358-44d48e09f824-kube-api-access-fc4kn\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.891892 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.891924 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.891958 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: E0130 16:15:52.892761 4740 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.892823 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: E0130 16:15:52.892851 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-tls-secret podName:e2829a20-2177-481a-9a86-73f8bb323661 nodeName:}" 
failed. No retries permitted until 2026-01-30 16:15:53.392827571 +0000 UTC m=+1202.029890170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-tls-secret") pod "cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" (UID: "e2829a20-2177-481a-9a86-73f8bb323661") : secret "cloudkitty-lokistack-gateway-http" not found Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.893491 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.893534 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.893567 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.893599 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.893623 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.893656 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.893716 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 
16:15:52.893746 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdbc9\" (UniqueName: \"kubernetes.io/projected/e2829a20-2177-481a-9a86-73f8bb323661-kube-api-access-hdbc9\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.893778 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.893798 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.893867 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.893899 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: E0130 16:15:52.894983 4740 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Jan 30 16:15:52 crc kubenswrapper[4740]: E0130 16:15:52.895076 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-tls-secret podName:d46b15b9-9ad3-4699-9358-44d48e09f824 nodeName:}" failed. No retries permitted until 2026-01-30 16:15:53.395047207 +0000 UTC m=+1202.032109816 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-tls-secret") pod "cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" (UID: "d46b15b9-9ad3-4699-9358-44d48e09f824") : secret "cloudkitty-lokistack-gateway-http" not found Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.892927 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.895067 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.895942 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.896093 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.896217 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.897682 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.899230 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.899585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.900099 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-rbac\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.900184 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.900462 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.900659 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-tenants\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.908259 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2829a20-2177-481a-9a86-73f8bb323661-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.908366 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d46b15b9-9ad3-4699-9358-44d48e09f824-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.914054 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc4kn\" (UniqueName: \"kubernetes.io/projected/d46b15b9-9ad3-4699-9358-44d48e09f824-kube-api-access-fc4kn\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.917943 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdbc9\" (UniqueName: \"kubernetes.io/projected/e2829a20-2177-481a-9a86-73f8bb323661-kube-api-access-hdbc9\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: 
\"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:52 crc kubenswrapper[4740]: I0130 16:15:52.970750 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.359431 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.364777 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.367904 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.368085 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.368634 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.416079 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.419308 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.424060 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d46b15b9-9ad3-4699-9358-44d48e09f824-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt\" (UID: \"d46b15b9-9ad3-4699-9358-44d48e09f824\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.425998 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e2829a20-2177-481a-9a86-73f8bb323661-tls-secret\") pod \"cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2\" (UID: \"e2829a20-2177-481a-9a86-73f8bb323661\") " pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.493599 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.495123 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.498316 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.498696 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.521029 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.521130 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.521206 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.521239 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.521267 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.521307 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.521377 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvh4x\" (UniqueName: \"kubernetes.io/projected/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-kube-api-access-tvh4x\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.521429 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.531031 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.592678 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.594594 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.598048 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.601753 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.618322 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.620170 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-qtgpv" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.622881 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.623000 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.623049 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.623073 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.623093 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" 
Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.623118 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.623150 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvh4x\" (UniqueName: \"kubernetes.io/projected/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-kube-api-access-tvh4x\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.623183 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.624337 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.634500 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.636241 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.670276 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.675481 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.676228 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.677522 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.687126 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.705442 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.753421 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.753524 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770634d4-2799-4d23-b96d-9f7fa5286e72-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.753595 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.753665 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96208f50-7c8d-49c1-b235-def86e2ea52d-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.753726 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.753840 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.753885 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.753941 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.753995 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.754036 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8zcb\" (UniqueName: \"kubernetes.io/projected/96208f50-7c8d-49c1-b235-def86e2ea52d-kube-api-access-w8zcb\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.754084 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.754116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-trftr\" (UniqueName: \"kubernetes.io/projected/770634d4-2799-4d23-b96d-9f7fa5286e72-kube-api-access-trftr\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.754193 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.760546 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.794657 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvh4x\" (UniqueName: \"kubernetes.io/projected/3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1-kube-api-access-tvh4x\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.803720 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.806164 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1\") " pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.871138 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.871774 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.871945 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.872057 4740 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.871480 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.872160 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8zcb\" (UniqueName: \"kubernetes.io/projected/96208f50-7c8d-49c1-b235-def86e2ea52d-kube-api-access-w8zcb\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.872472 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.872528 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trftr\" (UniqueName: \"kubernetes.io/projected/770634d4-2799-4d23-b96d-9f7fa5286e72-kube-api-access-trftr\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.872654 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.872843 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.872924 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.872960 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770634d4-2799-4d23-b96d-9f7fa5286e72-config\") pod 
\"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.873009 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.873073 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96208f50-7c8d-49c1-b235-def86e2ea52d-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.873161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.874163 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.875030 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.875627 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.875909 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96208f50-7c8d-49c1-b235-def86e2ea52d-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.876945 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770634d4-2799-4d23-b96d-9f7fa5286e72-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.879339 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.880335 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.880367 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.881097 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.882030 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/96208f50-7c8d-49c1-b235-def86e2ea52d-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.893086 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/770634d4-2799-4d23-b96d-9f7fa5286e72-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.897091 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trftr\" (UniqueName: \"kubernetes.io/projected/770634d4-2799-4d23-b96d-9f7fa5286e72-kube-api-access-trftr\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.903336 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8zcb\" (UniqueName: \"kubernetes.io/projected/96208f50-7c8d-49c1-b235-def86e2ea52d-kube-api-access-w8zcb\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.904738 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"770634d4-2799-4d23-b96d-9f7fa5286e72\") " 
pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:53 crc kubenswrapper[4740]: I0130 16:15:53.912082 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"96208f50-7c8d-49c1-b235-def86e2ea52d\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:54 crc kubenswrapper[4740]: I0130 16:15:54.010682 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:15:54 crc kubenswrapper[4740]: I0130 16:15:54.057110 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:15:54 crc kubenswrapper[4740]: I0130 16:15:54.167131 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Jan 30 16:15:54 crc kubenswrapper[4740]: I0130 16:15:54.455391 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:15:54 crc kubenswrapper[4740]: I0130 16:15:54.456963 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:15:54 crc kubenswrapper[4740]: I0130 16:15:54.457155 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 16:15:54 crc kubenswrapper[4740]: I0130 16:15:54.458788 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7444545e175a90767b9873079c8fd1472b5f709bb77111922611dbabedd78e11"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:15:54 crc kubenswrapper[4740]: I0130 16:15:54.459016 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://7444545e175a90767b9873079c8fd1472b5f709bb77111922611dbabedd78e11" gracePeriod=600 Jan 30 16:15:55 crc kubenswrapper[4740]: I0130 16:15:55.531743 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="7444545e175a90767b9873079c8fd1472b5f709bb77111922611dbabedd78e11" exitCode=0 Jan 30 16:15:55 crc kubenswrapper[4740]: I0130 16:15:55.531796 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"7444545e175a90767b9873079c8fd1472b5f709bb77111922611dbabedd78e11"} Jan 30 16:15:55 crc kubenswrapper[4740]: I0130 16:15:55.531847 4740 scope.go:117] "RemoveContainer" containerID="54a3dc50e2178ac6be5a1090a31fc5146169210c340898f9c81cac9ad152568a" Jan 30 
Jan 30 16:16:01 crc kubenswrapper[4740]: E0130 16:16:01.064159 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 16:16:01 crc kubenswrapper[4740]: E0130 16:16:01.065447 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qs5n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-88hc8_openstack(a66a2899-88ba-4309-8671-694d1f29704f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:16:01 crc kubenswrapper[4740]: E0130 16:16:01.066671 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" podUID="a66a2899-88ba-4309-8671-694d1f29704f" Jan 30 16:16:01 crc kubenswrapper[4740]: E0130 16:16:01.968847 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 16:16:01 crc kubenswrapper[4740]: E0130 16:16:01.969529 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fmfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-zs5g8_openstack(4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:16:01 crc kubenswrapper[4740]: E0130 16:16:01.970618 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" podUID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.200046 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.261736 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs5n5\" (UniqueName: \"kubernetes.io/projected/a66a2899-88ba-4309-8671-694d1f29704f-kube-api-access-qs5n5\") pod \"a66a2899-88ba-4309-8671-694d1f29704f\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.262247 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-config\") pod \"a66a2899-88ba-4309-8671-694d1f29704f\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.262457 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-dns-svc\") pod \"a66a2899-88ba-4309-8671-694d1f29704f\" (UID: \"a66a2899-88ba-4309-8671-694d1f29704f\") " Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.262924 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-config" (OuterVolumeSpecName: "config") pod "a66a2899-88ba-4309-8671-694d1f29704f" (UID: "a66a2899-88ba-4309-8671-694d1f29704f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.263046 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a66a2899-88ba-4309-8671-694d1f29704f" (UID: "a66a2899-88ba-4309-8671-694d1f29704f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.268848 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66a2899-88ba-4309-8671-694d1f29704f-kube-api-access-qs5n5" (OuterVolumeSpecName: "kube-api-access-qs5n5") pod "a66a2899-88ba-4309-8671-694d1f29704f" (UID: "a66a2899-88ba-4309-8671-694d1f29704f"). InnerVolumeSpecName "kube-api-access-qs5n5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.365493 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs5n5\" (UniqueName: \"kubernetes.io/projected/a66a2899-88ba-4309-8671-694d1f29704f-kube-api-access-qs5n5\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.365543 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.365559 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a66a2899-88ba-4309-8671-694d1f29704f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:02 crc kubenswrapper[4740]: E0130 16:16:02.407422 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 16:16:02 crc kubenswrapper[4740]: E0130 16:16:02.407685 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dtjlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-hpm8b_openstack(4f91b4d2-b91b-427d-93a5-473f7d477294): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:16:02 crc 
kubenswrapper[4740]: E0130 16:16:02.409099 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" podUID="4f91b4d2-b91b-427d-93a5-473f7d477294" Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.563041 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.605040 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" event={"ID":"a66a2899-88ba-4309-8671-694d1f29704f","Type":"ContainerDied","Data":"2a257a99fa8e0fdb94c7533ba88e2498609c52ff80fd24fa60675c3875f48492"} Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.605121 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-88hc8" Jan 30 16:16:02 crc kubenswrapper[4740]: E0130 16:16:02.609191 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" podUID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" Jan 30 16:16:02 crc kubenswrapper[4740]: E0130 16:16:02.610282 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" podUID="4f91b4d2-b91b-427d-93a5-473f7d477294" Jan 30 16:16:02 crc kubenswrapper[4740]: W0130 16:16:02.753691 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfabe06a_6c42_4191_b819_db7e22a9ea6b.slice/crio-e6e2999b066a78e5c5eb79fc9771a5329f5810cc795f40d1958c0c433cd5988a WatchSource:0}: Error finding container e6e2999b066a78e5c5eb79fc9771a5329f5810cc795f40d1958c0c433cd5988a: Status 404 returned error can't find the container with id e6e2999b066a78e5c5eb79fc9771a5329f5810cc795f40d1958c0c433cd5988a Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.805853 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-88hc8"] Jan 30 16:16:02 crc kubenswrapper[4740]: I0130 16:16:02.820978 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-88hc8"] Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.120629 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2"] Jan 30 16:16:03 crc kubenswrapper[4740]: W0130 16:16:03.127507 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod471174e9_72cd_40a9_8502_103a233c0dbe.slice/crio-cae5fc34e128c94dafec5ba40fc1ce6db048436638bde4d0aae9a42a6e39892b WatchSource:0}: Error finding container cae5fc34e128c94dafec5ba40fc1ce6db048436638bde4d0aae9a42a6e39892b: Status 404 returned error can't find the container with id cae5fc34e128c94dafec5ba40fc1ce6db048436638bde4d0aae9a42a6e39892b
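The dnsmasq pods above are stuck in the image-pull path: the CRI PullImage RPC returns gRPC code Canceled ("copying config: context canceled"), kubelet records ErrImagePull, and subsequent sync attempts are throttled and reported as ImagePullBackOff. A sketch of both halves in Go, assuming a hypothetical pullImage stand-in and illustrative backoff constants (not kubelet's actual parameters):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// pullImage is a hypothetical stand-in for a CRI ImageService PullImage call;
// a context canceled mid-pull surfaces as gRPC status code Canceled.
func pullImage(ctx context.Context, image string) error {
	if ctx.Err() != nil {
		return status.Error(codes.Canceled, "copying config: context canceled")
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // simulate the pull being canceled mid-flight

	image := "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
	delay, maxDelay := 10*time.Second, 5*time.Minute // illustrative constants only

	for attempt := 1; attempt <= 4; attempt++ {
		err := pullImage(ctx, image)
		if s, ok := status.FromError(err); ok && s.Code() == codes.Canceled {
			// The first failure is reported as ErrImagePull; while the retry
			// timer runs, the pod's status shows ImagePullBackOff.
			fmt.Printf("attempt %d: ErrImagePull (%s); backing off %s (ImagePullBackOff)\n",
				attempt, s.Message(), delay)
		}
		delay *= 2 // exponential backoff, capped
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

The doubling-with-cap retry is the general shape of kubelet's image backoff; the exact initial delay and cap are configuration-dependent, which is why the values above are only placeholders.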
path="/var/lib/kubelet/pods/a66a2899-88ba-4309-8671-694d1f29704f/volumes" Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.538716 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.558118 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 16:16:03 crc kubenswrapper[4740]: W0130 16:16:03.562226 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96208f50_7c8d_49c1_b235_def86e2ea52d.slice/crio-407f29aaa0dc57622d9db2db008b265b2ee61af7a6567cd45d6baaa692404839 WatchSource:0}: Error finding container 407f29aaa0dc57622d9db2db008b265b2ee61af7a6567cd45d6baaa692404839: Status 404 returned error can't find the container with id 407f29aaa0dc57622d9db2db008b265b2ee61af7a6567cd45d6baaa692404839 Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.570728 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.583101 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.591631 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8vhhm"] Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.600497 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.614976 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"96208f50-7c8d-49c1-b235-def86e2ea52d","Type":"ContainerStarted","Data":"407f29aaa0dc57622d9db2db008b265b2ee61af7a6567cd45d6baaa692404839"} Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.617579 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" event={"ID":"471174e9-72cd-40a9-8502-103a233c0dbe","Type":"ContainerStarted","Data":"cae5fc34e128c94dafec5ba40fc1ce6db048436638bde4d0aae9a42a6e39892b"} Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.619423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cfabe06a-6c42-4191-b819-db7e22a9ea6b","Type":"ContainerStarted","Data":"e6e2999b066a78e5c5eb79fc9771a5329f5810cc795f40d1958c0c433cd5988a"} Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.620816 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1","Type":"ContainerStarted","Data":"ade033ddd501d1dfe89cc1272c8201f952617f2e38b8c0a1befadc522468daa6"} Jan 30 16:16:03 crc kubenswrapper[4740]: W0130 16:16:03.640943 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cc1f1a_e021_42dd_b435_64eaf9cfa1d7.slice/crio-1ffa5f9d19576cdbfd091eea338581180cdf017d80b9540fa5a5243689b146af WatchSource:0}: Error finding container 1ffa5f9d19576cdbfd091eea338581180cdf017d80b9540fa5a5243689b146af: Status 404 returned error can't find the container with id 1ffa5f9d19576cdbfd091eea338581180cdf017d80b9540fa5a5243689b146af Jan 30 16:16:03 crc kubenswrapper[4740]: W0130 16:16:03.663488 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8ee026b_f6be_4d78_adf8_eaa7c77e1e00.slice/crio-ca7c29baabffa9574bad258ff0e49852e08a6f671a48a9c274a211edd8f29ee8 WatchSource:0}: Error finding container ca7c29baabffa9574bad258ff0e49852e08a6f671a48a9c274a211edd8f29ee8: Status 404 returned error can't find the container with id ca7c29baabffa9574bad258ff0e49852e08a6f671a48a9c274a211edd8f29ee8
Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.663863 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2"]
Jan 30 16:16:03 crc kubenswrapper[4740]: W0130 16:16:03.668115 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47e7ebfc_24f9_4946_aace_c402546d5a60.slice/crio-d6e234ba66fa486ca7721adb0f1622961ba8ed753ccfad1f1ad4c72b083fdc3e WatchSource:0}: Error finding container d6e234ba66fa486ca7721adb0f1622961ba8ed753ccfad1f1ad4c72b083fdc3e: Status 404 returned error can't find the container with id d6e234ba66fa486ca7721adb0f1622961ba8ed753ccfad1f1ad4c72b083fdc3e
Jan 30 16:16:03 crc kubenswrapper[4740]: W0130 16:16:03.673552 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2829a20_2177_481a_9a86_73f8bb323661.slice/crio-255e5f66090bbacb2d47ef64d4df360b6ff3a26968a37be829436d210cc19164 WatchSource:0}: Error finding container 255e5f66090bbacb2d47ef64d4df360b6ff3a26968a37be829436d210cc19164: Status 404 returned error can't find the container with id 255e5f66090bbacb2d47ef64d4df360b6ff3a26968a37be829436d210cc19164
Jan 30 16:16:03 crc kubenswrapper[4740]: W0130 16:16:03.955845 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod483203e9_89d7_4b67_b0b9_d0bda08469da.slice/crio-2fe23e7abba1dc3ef7a761da91e4f423c78203eab9b7747aebb1e26e97a85ee9 WatchSource:0}: Error finding container 2fe23e7abba1dc3ef7a761da91e4f423c78203eab9b7747aebb1e26e97a85ee9: Status 404 returned error can't find the container with id 2fe23e7abba1dc3ef7a761da91e4f423c78203eab9b7747aebb1e26e97a85ee9
Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.966744 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.976399 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.984065 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt"]
Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.992595 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7"]
Jan 30 16:16:03 crc kubenswrapper[4740]: I0130 16:16:03.998865 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4"]
Jan 30 16:16:04 crc kubenswrapper[4740]: E0130 16:16:04.325616 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 30 16:16:04 crc kubenswrapper[4740]: E0130 16:16:04.325905 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7p6zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(3aae2bad-ea00-4d1f-a30f-a8891e15ad05): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:16:04 crc kubenswrapper[4740]: E0130 16:16:04.327166 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="3aae2bad-ea00-4d1f-a30f-a8891e15ad05"
Jan 30 16:16:04 crc kubenswrapper[4740]: I0130 16:16:04.631312 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"770634d4-2799-4d23-b96d-9f7fa5286e72","Type":"ContainerStarted","Data":"9465c6a0ed6cff22b1e1b055bf8afb087c3be83330a4ce753054d6edf06f7fe0"}
Jan 30 16:16:04 crc kubenswrapper[4740]: I0130 16:16:04.632730 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8vhhm" event={"ID":"25c16e6c-3931-4064-bf64-baf0759712a5","Type":"ContainerStarted","Data":"64f7b999f117832d4c8484832d8b252c3899b8a62c241a263c21c1e46f29e9f8"}
Jan 30 16:16:04 crc kubenswrapper[4740]: I0130 16:16:04.634445 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" event={"ID":"d46b15b9-9ad3-4699-9358-44d48e09f824","Type":"ContainerStarted","Data":"3b43a72d6e49b0232636a36a5c04f696e0f919fb301068feef08f47a00cd8ce2"}
Jan 30 16:16:04 crc kubenswrapper[4740]: I0130 16:16:04.635464 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8ee026b-f6be-4d78-adf8-eaa7c77e1e00","Type":"ContainerStarted","Data":"ca7c29baabffa9574bad258ff0e49852e08a6f671a48a9c274a211edd8f29ee8"}
Jan 30 16:16:04 crc kubenswrapper[4740]: I0130 16:16:04.636428 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"47e7ebfc-24f9-4946-aace-c402546d5a60","Type":"ContainerStarted","Data":"d6e234ba66fa486ca7721adb0f1622961ba8ed753ccfad1f1ad4c72b083fdc3e"}
Jan 30 16:16:04 crc kubenswrapper[4740]: I0130 16:16:04.637375 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"483203e9-89d7-4b67-b0b9-d0bda08469da","Type":"ContainerStarted","Data":"2fe23e7abba1dc3ef7a761da91e4f423c78203eab9b7747aebb1e26e97a85ee9"}
Jan 30 16:16:04 crc kubenswrapper[4740]: I0130 16:16:04.639196 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"8a0922e4de366e57138167824b08934e73cd7659f84fae5490627ddb260dd599"}
Jan 30 16:16:04 crc kubenswrapper[4740]: I0130 16:16:04.641973 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" event={"ID":"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2","Type":"ContainerStarted","Data":"f371e520f68e5592d8b1973b47b05b748c7889a081751705b4bb942ffaee932b"}
Jan 30 16:16:04 crc kubenswrapper[4740]: I0130 16:16:04.643502 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" event={"ID":"e2829a20-2177-481a-9a86-73f8bb323661","Type":"ContainerStarted","Data":"255e5f66090bbacb2d47ef64d4df360b6ff3a26968a37be829436d210cc19164"}
Jan 30 16:16:04 crc kubenswrapper[4740]: I0130 16:16:04.644368 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" event={"ID":"2614d072-47f4-4ed5-bfca-df4e1c46c665","Type":"ContainerStarted","Data":"5fe0a3bd2ac28da0e4df725d9a7591eab6e63487e633eb3bd56cdd59cc193461"}
Jan 30 16:16:04 crc kubenswrapper[4740]: I0130 16:16:04.645315 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7","Type":"ContainerStarted","Data":"1ffa5f9d19576cdbfd091eea338581180cdf017d80b9540fa5a5243689b146af"}
Jan 30 16:16:04 crc kubenswrapper[4740]: E0130 16:16:04.647525 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="3aae2bad-ea00-4d1f-a30f-a8891e15ad05"
Jan 30 16:16:07 crc kubenswrapper[4740]: E0130 16:16:07.153786 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Jan 30 16:16:07 crc kubenswrapper[4740]: E0130 16:16:07.154909 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qxb2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(860fd88f-2b83-4fc3-8411-7d10dc1281b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:16:07 crc kubenswrapper[4740]: E0130 16:16:07.156435 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="860fd88f-2b83-4fc3-8411-7d10dc1281b2"
Jan 30 16:16:07 crc kubenswrapper[4740]: E0130 16:16:07.245441 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 16:16:07 crc kubenswrapper[4740]: E0130 16:16:07.245849 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvljq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-hs28x_openstack(7127e217-781c-4e11-b832-708388dfe45c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:16:07 crc kubenswrapper[4740]: E0130 16:16:07.247235 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x" podUID="7127e217-781c-4e11-b832-708388dfe45c"
Jan 30 16:16:07 crc kubenswrapper[4740]: E0130 16:16:07.679245 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="860fd88f-2b83-4fc3-8411-7d10dc1281b2"
Jan 30 16:16:07 crc kubenswrapper[4740]: I0130 16:16:07.815614 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7wnqc"]
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.147476 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x"
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.206367 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvljq\" (UniqueName: \"kubernetes.io/projected/7127e217-781c-4e11-b832-708388dfe45c-kube-api-access-lvljq\") pod \"7127e217-781c-4e11-b832-708388dfe45c\" (UID: \"7127e217-781c-4e11-b832-708388dfe45c\") "
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.206501 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7127e217-781c-4e11-b832-708388dfe45c-config\") pod \"7127e217-781c-4e11-b832-708388dfe45c\" (UID: \"7127e217-781c-4e11-b832-708388dfe45c\") "
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.207263 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7127e217-781c-4e11-b832-708388dfe45c-config" (OuterVolumeSpecName: "config") pod "7127e217-781c-4e11-b832-708388dfe45c" (UID: "7127e217-781c-4e11-b832-708388dfe45c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.213817 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7127e217-781c-4e11-b832-708388dfe45c-kube-api-access-lvljq" (OuterVolumeSpecName: "kube-api-access-lvljq") pod "7127e217-781c-4e11-b832-708388dfe45c" (UID: "7127e217-781c-4e11-b832-708388dfe45c"). InnerVolumeSpecName "kube-api-access-lvljq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.309218 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7127e217-781c-4e11-b832-708388dfe45c-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.309263 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvljq\" (UniqueName: \"kubernetes.io/projected/7127e217-781c-4e11-b832-708388dfe45c-kube-api-access-lvljq\") on node \"crc\" DevicePath \"\""
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.386784 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.493715 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.685125 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x" event={"ID":"7127e217-781c-4e11-b832-708388dfe45c","Type":"ContainerDied","Data":"f5734ab1c1f574df7fa8259a05a2d2c9e5c02d81ad449602f536630f97d1f4c8"}
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.685250 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hs28x"
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.689920 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"36bbce3a-c121-4811-9a61-ab05b62dce0b","Type":"ContainerStarted","Data":"61d33a56b486016024d75138198aa4d8faeb3c5c3f56caed946f3b55e9dac61e"}
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.692828 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7wnqc" event={"ID":"81f43ac7-ed84-4eff-af70-47991eaab066","Type":"ContainerStarted","Data":"a2958f1a20af836fa5efb5cc24b85fcc5dde9f5950dbfae5e56b47450025a080"}
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.694367 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c2182168-2683-42dd-abfc-1d19d9079ca6","Type":"ContainerStarted","Data":"9138fc42d5cfa783246cb687f32b58a159cd481ac58355733f8265b5a00a7f05"}
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.780648 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hs28x"]
Jan 30 16:16:08 crc kubenswrapper[4740]: I0130 16:16:08.791535 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hs28x"]
Jan 30 16:16:09 crc kubenswrapper[4740]: I0130 16:16:09.351702 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7127e217-781c-4e11-b832-708388dfe45c" path="/var/lib/kubelet/pods/7127e217-781c-4e11-b832-708388dfe45c/volumes"
Jan 30 16:16:20 crc kubenswrapper[4740]: E0130 16:16:20.418371 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d"
Jan 30 16:16:20 crc kubenswrapper[4740]: E0130 16:16:20.419363 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-index-gateway,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d,Command:[],Args:[-target=index-gateway -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8zcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-index-gateway-0_openstack(96208f50-7c8d-49c1-b235-def86e2ea52d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:16:20 crc kubenswrapper[4740]: E0130 16:16:20.420827 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="96208f50-7c8d-49c1-b235-def86e2ea52d"
Jan 30 16:16:20 crc kubenswrapper[4740]: E0130 16:16:20.815589 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="96208f50-7c8d-49c1-b235-def86e2ea52d"
Jan 30 16:16:22 crc kubenswrapper[4740]: E0130 16:16:22.212953 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d"
Jan 30 16:16:22 crc kubenswrapper[4740]: E0130 16:16:22.214534 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-query-frontend,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2b491fcb180423632d30811515a439a7a7f41023c1cfe4780647f18969b85a1d,Command:[],Args:[-target=query-frontend -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzn4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4_openstack(ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:16:22 crc kubenswrapper[4740]: E0130 16:16:22.215794 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" podUID="ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2"
Jan 30 16:16:28 crc kubenswrapper[4740]: E0130 16:16:28.177255 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Jan 30 16:16:28 crc kubenswrapper[4740]: E0130 16:16:28.178150 4740 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Jan 30 16:16:28 crc kubenswrapper[4740]: E0130 16:16:28.178338 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8cc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(a8ee026b-f6be-4d78-adf8-eaa7c77e1e00): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 16:16:28 crc kubenswrapper[4740]: E0130 16:16:28.180520 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="a8ee026b-f6be-4d78-adf8-eaa7c77e1e00"
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.912981 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"09f1ea51-a4df-41eb-a996-f19303114474","Type":"ContainerStarted","Data":"203c2709c88a79a29910cfa6ed3c3a2cb4779a618507daa9074d8e993b819694"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.914729 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"860fd88f-2b83-4fc3-8411-7d10dc1281b2","Type":"ContainerStarted","Data":"18830ee869670f7ca9913ca98acb195b4ffa37625511af68394e4945e405be8b"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.916771 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3aae2bad-ea00-4d1f-a30f-a8891e15ad05","Type":"ContainerStarted","Data":"9c5383bc7a9fd9a7eb8cc88dd1a216cd4547e09dff876e8f0bc0bc92048a1f2c"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.919370 4740 generic.go:334] "Generic (PLEG): container finished" podID="4f91b4d2-b91b-427d-93a5-473f7d477294" containerID="74e0946b05125a3d953964096f314b8e835248097cba0b8a555aacb26a0b4b00" exitCode=0
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.919423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" event={"ID":"4f91b4d2-b91b-427d-93a5-473f7d477294","Type":"ContainerDied","Data":"74e0946b05125a3d953964096f314b8e835248097cba0b8a555aacb26a0b4b00"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.922882 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c2182168-2683-42dd-abfc-1d19d9079ca6","Type":"ContainerStarted","Data":"646d65e9a56031f44584a3f3e40911784b25e7e3a524304abf26cab736ac0e84"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.927070 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"483203e9-89d7-4b67-b0b9-d0bda08469da","Type":"ContainerStarted","Data":"84dcc19e76bb8d8bbae7e8a65df70b8ad7b5e15b80bceadfa923918eac0aca0a"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.933779 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" event={"ID":"e2829a20-2177-481a-9a86-73f8bb323661","Type":"ContainerStarted","Data":"2678d4e75cb6440c57d557665faf4ac875c80cf45633c99b73cce0499320aaca"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.934677 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2"
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.936724 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" event={"ID":"471174e9-72cd-40a9-8502-103a233c0dbe","Type":"ContainerStarted","Data":"33843a29846779506cc76f1dc6c7da3dfc01402cf971696856f7d973a89605ff"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.938262 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2"
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.948720 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1","Type":"ContainerStarted","Data":"cf8ba400f487bfbcab5e15d7528d15a0a9a0f178c6b486508ad2fbfcfb6b9dc5"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.948813 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0"
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.953187 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"36bbce3a-c121-4811-9a61-ab05b62dce0b","Type":"ContainerStarted","Data":"1cdcbb68a9befdcc6f971c26281d04651a5dbdc9bd2e0526898b963a7a1ba2f0"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.960493 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" event={"ID":"2614d072-47f4-4ed5-bfca-df4e1c46c665","Type":"ContainerStarted","Data":"4946ab08ef6231addca269468f9f22c8f9017f099f435b0f4a10eb2f4ca8c589"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.960654 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7"
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.963016 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"20cc1f1a-e021-42dd-b435-64eaf9cfa1d7","Type":"ContainerStarted","Data":"74a03d54332df88f206e717e8aee0b038b3741832cb3ca5d3d7d1b06d0ff73fb"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.964014 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.968084 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"47e7ebfc-24f9-4946-aace-c402546d5a60","Type":"ContainerStarted","Data":"2364faa2402c96cf931c2b350faef26d988566032895e61a0ddee501ab469074"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.972916 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" event={"ID":"ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2","Type":"ContainerStarted","Data":"401a470261812c95c5d949643ea4f24fe3de5391534d00d8d2aa04d1efd1632e"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.973928 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4"
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.975854 4740 generic.go:334] "Generic (PLEG): container finished" podID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" containerID="6a5111494e387970f12cbdc690abf2494542224f3528c3669280bf9245455564" exitCode=0
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.975919 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" event={"ID":"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd","Type":"ContainerDied","Data":"6a5111494e387970f12cbdc690abf2494542224f3528c3669280bf9245455564"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.978534 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8vhhm" event={"ID":"25c16e6c-3931-4064-bf64-baf0759712a5","Type":"ContainerStarted","Data":"09d23d23e39bb3eae395d2c843436cf2b72e4dd91e77ab16d60058a17e9ab4ed"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.978693 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-8vhhm"
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.983906 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2"
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.990163 4740 generic.go:334] "Generic (PLEG): container finished" podID="81f43ac7-ed84-4eff-af70-47991eaab066" containerID="f5d4ab91d737711348857fc8b1bcff38c3b1cb98aaabd3af0ff88b189e66f01e" exitCode=0
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.990262 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7wnqc" event={"ID":"81f43ac7-ed84-4eff-af70-47991eaab066","Type":"ContainerDied","Data":"f5d4ab91d737711348857fc8b1bcff38c3b1cb98aaabd3af0ff88b189e66f01e"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.995167 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" event={"ID":"d46b15b9-9ad3-4699-9358-44d48e09f824","Type":"ContainerStarted","Data":"23f2097d5e3799229153d1524cedc6a7de537bff504462ce25cc4ced08ce128c"}
Jan 30 16:16:28 crc kubenswrapper[4740]: I0130 16:16:28.997249 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt"
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.009149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"770634d4-2799-4d23-b96d-9f7fa5286e72","Type":"ContainerStarted","Data":"c270ba0ece0dd00c2593cbd85c132544eed40566b7c217998a04f477a0afd053"}
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.009207 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 30 16:16:29 crc kubenswrapper[4740]: E0130 16:16:29.010253 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="a8ee026b-f6be-4d78-adf8-eaa7c77e1e00"
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.016665 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt"
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.040853 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2" podStartSLOduration=17.541838459 podStartE2EDuration="37.040830467s" podCreationTimestamp="2026-01-30 16:15:52 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.677996652 +0000 UTC m=+1212.315059251" lastFinishedPulling="2026-01-30 16:16:23.17698866 +0000 UTC m=+1231.814051259" observedRunningTime="2026-01-30 16:16:29.033501525 +0000 UTC m=+1237.670564134" watchObservedRunningTime="2026-01-30 16:16:29.040830467 +0000 UTC m=+1237.677893066"
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.071655 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=17.345918437999998 podStartE2EDuration="37.071620352s" podCreationTimestamp="2026-01-30 16:15:52 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.569955762 +0000 UTC m=+1212.207018361" lastFinishedPulling="2026-01-30 16:16:23.295657666 +0000 UTC m=+1231.932720275" observedRunningTime="2026-01-30 16:16:29.066489914 +0000 UTC m=+1237.703552533" watchObservedRunningTime="2026-01-30 16:16:29.071620352 +0000 UTC m=+1237.708682951"
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.108180 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2" podStartSLOduration=17.467772946 podStartE2EDuration="37.108153609s" podCreationTimestamp="2026-01-30 16:15:52 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.129907906 +0000 UTC m=+1211.766970505" lastFinishedPulling="2026-01-30 16:16:22.770288569 +0000 UTC m=+1231.407351168" observedRunningTime="2026-01-30 16:16:29.103638627 +0000 UTC m=+1237.740701236" watchObservedRunningTime="2026-01-30 16:16:29.108153609 +0000 UTC m=+1237.745216218"
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.195030 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7" podStartSLOduration=16.74467252 podStartE2EDuration="37.195006035s" podCreationTimestamp="2026-01-30 16:15:52 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.978724809 +0000 UTC m=+1212.615787408" lastFinishedPulling="2026-01-30 16:16:24.429058294 +0000 UTC m=+1233.066120923" observedRunningTime="2026-01-30 16:16:29.193532789 +0000 UTC m=+1237.830595398" watchObservedRunningTime="2026-01-30 16:16:29.195006035 +0000 UTC m=+1237.832068634"
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.257326 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=16.781947545 podStartE2EDuration="37.257296882s" podCreationTimestamp="2026-01-30 16:15:52 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.953687846 +0000 UTC m=+1212.590750445" lastFinishedPulling="2026-01-30 16:16:24.429037143 +0000 UTC m=+1233.066099782" observedRunningTime="2026-01-30 16:16:29.219449052 +0000 UTC m=+1237.856511661" watchObservedRunningTime="2026-01-30 16:16:29.257296882 +0000 UTC m=+1237.894359481"
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.296648 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt" podStartSLOduration=18.876514496 podStartE2EDuration="37.296620068s" podCreationTimestamp="2026-01-30 16:15:52 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.977717134 +0000 UTC m=+1212.614779723" lastFinishedPulling="2026-01-30 16:16:22.397822706 +0000 UTC m=+1231.034885295" observedRunningTime="2026-01-30 16:16:29.288979928 +0000 UTC m=+1237.926042537" watchObservedRunningTime="2026-01-30 16:16:29.296620068 +0000 UTC m=+1237.933682667"
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.331071 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=32.613235638 podStartE2EDuration="51.331044122s" podCreationTimestamp="2026-01-30 16:15:38 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.64379143 +0000 UTC m=+1212.280854029" lastFinishedPulling="2026-01-30 16:16:22.361599914 +0000 UTC m=+1230.998662513" observedRunningTime="2026-01-30 16:16:29.315807284 +0000 UTC m=+1237.952869883" watchObservedRunningTime="2026-01-30 16:16:29.331044122 +0000 UTC m=+1237.968106721"
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.346568 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8vhhm" podStartSLOduration=24.476578522 podStartE2EDuration="45.346542817s" podCreationTimestamp="2026-01-30 16:15:44 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.65420468 +0000 UTC m=+1212.291267279" lastFinishedPulling="2026-01-30 16:16:24.524168975 +0000 UTC m=+1233.161231574" observedRunningTime="2026-01-30 16:16:29.340800825 +0000 UTC m=+1237.977863424" watchObservedRunningTime="2026-01-30 16:16:29.346542817 +0000 UTC m=+1237.983605416"
Jan 30 16:16:29 crc kubenswrapper[4740]: I0130 16:16:29.376644 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4" podStartSLOduration=-9223371999.478153 podStartE2EDuration="37.376622214s" podCreationTimestamp="2026-01-30 16:15:52 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.976872493 +0000 UTC m=+1212.613935092" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:16:29.372692746 +0000 UTC m=+1238.009755345" watchObservedRunningTime="2026-01-30 16:16:29.376622214 +0000 UTC m=+1238.013684803"
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.040153 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" event={"ID":"4f91b4d2-b91b-427d-93a5-473f7d477294","Type":"ContainerStarted","Data":"a8167b003fa720f297973fe0b9df29f346e6e5de6f08d1ba6538b7fec7f75066"}
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.040968 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b"
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.046622 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7wnqc" event={"ID":"81f43ac7-ed84-4eff-af70-47991eaab066","Type":"ContainerStarted","Data":"d8189c2a0c86f4b0f8f4048eb208bd26cddc64680c923307ffcd0a13b7ab0815"}
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.046679 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7wnqc" event={"ID":"81f43ac7-ed84-4eff-af70-47991eaab066","Type":"ContainerStarted","Data":"dac3e34d96cb9d3ebc8c057caf9e6b075f63d40984519f6ad9949e1997655e05"}
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.046723 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7wnqc"
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.046822 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7wnqc"
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.051053 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" event={"ID":"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd","Type":"ContainerStarted","Data":"b65963df3245544d5d93f1572a91096f86c1e3aaf09f62e8961cc8c0be40b3fd"}
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.051834 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8"
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.057200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cfabe06a-6c42-4191-b819-db7e22a9ea6b","Type":"ContainerStarted","Data":"811e1ab3fcca6e2edcda7c6cfb4d4afa6d235e9405fe2a573c48bb67dd411c09"}
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.073786 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" podStartSLOduration=9.527116218 podStartE2EDuration="58.073759724s" podCreationTimestamp="2026-01-30 16:15:34 +0000 UTC" firstStartedPulling="2026-01-30 16:15:35.88208532 +0000 UTC m=+1184.519147919" lastFinishedPulling="2026-01-30 16:16:24.428728786 +0000 UTC m=+1233.065791425" observedRunningTime="2026-01-30 16:16:32.066013912 +0000 UTC m=+1240.703076521" watchObservedRunningTime="2026-01-30 16:16:32.073759724 +0000 UTC m=+1240.710822333"
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.098519 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7wnqc" podStartSLOduration=33.147801546 podStartE2EDuration="48.098492228s" podCreationTimestamp="2026-01-30 16:15:44 +0000 UTC" firstStartedPulling="2026-01-30 16:16:07.831395621 +0000 UTC m=+1216.468458220" lastFinishedPulling="2026-01-30 16:16:22.782086303 +0000 UTC m=+1231.419148902" observedRunningTime="2026-01-30 16:16:32.09253585 +0000 UTC m=+1240.729598459" watchObservedRunningTime="2026-01-30 16:16:32.098492228 +0000 UTC m=+1240.735554827"
Jan 30 16:16:32 crc kubenswrapper[4740]: I0130 16:16:32.153955 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" podStartSLOduration=8.979280889 podStartE2EDuration="58.153929284s" podCreationTimestamp="2026-01-30 16:15:34 +0000 UTC" firstStartedPulling="2026-01-30 16:15:35.345788757 +0000 UTC m=+1183.982851346" lastFinishedPulling="2026-01-30 16:16:24.520437132 +0000 UTC m=+1233.157499741" observedRunningTime="2026-01-30 16:16:32.145280499 +0000 UTC m=+1240.782343098" watchObservedRunningTime="2026-01-30 16:16:32.153929284 +0000 UTC m=+1240.790991883"
Jan 30 16:16:33 crc kubenswrapper[4740]: I0130 16:16:33.870471 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 30 16:16:35 crc kubenswrapper[4740]: I0130 16:16:35.087524 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"96208f50-7c8d-49c1-b235-def86e2ea52d","Type":"ContainerStarted","Data":"247ce71ee81d8e2a908d138dc232ca6471379e612c1e7609b3334d4950878f1c"}
Jan 30 16:16:35 crc kubenswrapper[4740]: I0130 16:16:35.118908 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=-9223371993.735899 podStartE2EDuration="43.118877573s" podCreationTimestamp="2026-01-30 16:15:52 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.56505999 +0000 UTC m=+1212.202122589" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:16:35.116867883 +0000 UTC m=+1243.753930502" watchObservedRunningTime="2026-01-30 16:16:35.118877573 +0000 UTC m=+1243.755940192"
Jan 30 16:16:36 crc kubenswrapper[4740]: I0130 16:16:36.099641 4740 generic.go:334] "Generic (PLEG): container finished" podID="47e7ebfc-24f9-4946-aace-c402546d5a60" containerID="2364faa2402c96cf931c2b350faef26d988566032895e61a0ddee501ab469074" exitCode=0
Jan 30 16:16:36 crc kubenswrapper[4740]: I0130 16:16:36.099694 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"47e7ebfc-24f9-4946-aace-c402546d5a60","Type":"ContainerDied","Data":"2364faa2402c96cf931c2b350faef26d988566032895e61a0ddee501ab469074"}
Jan 30 16:16:38 crc kubenswrapper[4740]: I0130 16:16:38.122882 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"36bbce3a-c121-4811-9a61-ab05b62dce0b","Type":"ContainerStarted","Data":"3f17878bc4da64e6f4cdc72720bcdc9036d840a6ec9eed8dd4460ba4fbb02686"}
Jan 30 16:16:38 crc kubenswrapper[4740]: I0130 16:16:38.126054 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c2182168-2683-42dd-abfc-1d19d9079ca6","Type":"ContainerStarted","Data":"2a7fd0fb7b3d791bd824f3ab68a1f02240e15fbf1028afb7815051171232b949"}
Jan 30 16:16:38 crc kubenswrapper[4740]: I0130 16:16:38.146602 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.35637097 podStartE2EDuration="51.146580989s" podCreationTimestamp="2026-01-30 16:15:47 +0000 UTC" firstStartedPulling="2026-01-30 16:16:08.453662534 +0000 UTC m=+1217.090725143" lastFinishedPulling="2026-01-30 16:16:36.243872553 +0000 UTC m=+1244.880935162" observedRunningTime="2026-01-30 16:16:38.140740724 +0000 UTC m=+1246.777803353" watchObservedRunningTime="2026-01-30 16:16:38.146580989 +0000 UTC m=+1246.783643588"
Jan 30 16:16:38 crc kubenswrapper[4740]: I0130 16:16:38.166451 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=26.863283212 podStartE2EDuration="55.166430342s" podCreationTimestamp="2026-01-30 16:15:43 +0000 UTC" firstStartedPulling="2026-01-30 16:16:08.497850574 +0000 UTC m=+1217.134913173" lastFinishedPulling="2026-01-30 16:16:36.800997694 +0000 UTC m=+1245.438060303" observedRunningTime="2026-01-30 16:16:38.161679394 +0000 UTC m=+1246.798742033" watchObservedRunningTime="2026-01-30 16:16:38.166430342 +0000 UTC m=+1246.803492941"
Jan 30 16:16:38 crc kubenswrapper[4740]: I0130 16:16:38.847044 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 30 16:16:38 crc kubenswrapper[4740]: I0130 16:16:38.854020 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-fpfkt"]
Jan 30 16:16:38 crc kubenswrapper[4740]: I0130 16:16:38.855335 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:38 crc kubenswrapper[4740]: I0130 16:16:38.858484 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 30 16:16:38 crc kubenswrapper[4740]: I0130 16:16:38.882291 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fpfkt"]
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.015813 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12656704-b213-40b2-9520-58db055e7380-config\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.015890 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12656704-b213-40b2-9520-58db055e7380-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.016250 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12656704-b213-40b2-9520-58db055e7380-ovn-rundir\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.016381 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12656704-b213-40b2-9520-58db055e7380-combined-ca-bundle\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.016430 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12656704-b213-40b2-9520-58db055e7380-ovs-rundir\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.016534 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvcwk\" (UniqueName: \"kubernetes.io/projected/12656704-b213-40b2-9520-58db055e7380-kube-api-access-lvcwk\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.039969 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hpm8b"]
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.040278 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" podUID="4f91b4d2-b91b-427d-93a5-473f7d477294" containerName="dnsmasq-dns" containerID="cri-o://a8167b003fa720f297973fe0b9df29f346e6e5de6f08d1ba6538b7fec7f75066" gracePeriod=10
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.042515 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.079548 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh924"]
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.081540 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.088950 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.120449 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12656704-b213-40b2-9520-58db055e7380-ovn-rundir\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.120505 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12656704-b213-40b2-9520-58db055e7380-combined-ca-bundle\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.120531 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12656704-b213-40b2-9520-58db055e7380-ovs-rundir\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.120564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvcwk\" (UniqueName: \"kubernetes.io/projected/12656704-b213-40b2-9520-58db055e7380-kube-api-access-lvcwk\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.120682 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12656704-b213-40b2-9520-58db055e7380-config\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.120729 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12656704-b213-40b2-9520-58db055e7380-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.120899 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/12656704-b213-40b2-9520-58db055e7380-ovn-rundir\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.121453 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/12656704-b213-40b2-9520-58db055e7380-ovs-rundir\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.121826 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh924"]
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.122091 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12656704-b213-40b2-9520-58db055e7380-config\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.129200 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/12656704-b213-40b2-9520-58db055e7380-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.140147 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12656704-b213-40b2-9520-58db055e7380-combined-ca-bundle\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.176245 4740 generic.go:334] "Generic (PLEG): container finished" podID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerID="811e1ab3fcca6e2edcda7c6cfb4d4afa6d235e9405fe2a573c48bb67dd411c09" exitCode=0
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.176491 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cfabe06a-6c42-4191-b819-db7e22a9ea6b","Type":"ContainerDied","Data":"811e1ab3fcca6e2edcda7c6cfb4d4afa6d235e9405fe2a573c48bb67dd411c09"}
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.178168 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvcwk\" (UniqueName: \"kubernetes.io/projected/12656704-b213-40b2-9520-58db055e7380-kube-api-access-lvcwk\") pod \"ovn-controller-metrics-fpfkt\" (UID: \"12656704-b213-40b2-9520-58db055e7380\") " pod="openstack/ovn-controller-metrics-fpfkt"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.198848 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.224769 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.224905 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-config\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.224938 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbcgf\" (UniqueName: \"kubernetes.io/projected/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-kube-api-access-dbcgf\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.225187 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.255159 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.311107 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zs5g8"]
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.313944 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" podUID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" containerName="dnsmasq-dns" containerID="cri-o://b65963df3245544d5d93f1572a91096f86c1e3aaf09f62e8961cc8c0be40b3fd" gracePeriod=10
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.317723 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.340987 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.341140 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-config\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.341187 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbcgf\" (UniqueName: \"kubernetes.io/projected/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-kube-api-access-dbcgf\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.392710 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.396682 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-config\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.397063 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName:
\"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.406712 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.422786 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbcgf\" (UniqueName: \"kubernetes.io/projected/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-kube-api-access-dbcgf\") pod \"dnsmasq-dns-5bf47b49b7-rh924\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") " pod="openstack/dnsmasq-dns-5bf47b49b7-rh924" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.438395 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-5n7s2"] Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.453533 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.457227 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.476750 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fpfkt" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.477136 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-5n7s2"] Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.596921 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.597254 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5wjh\" (UniqueName: \"kubernetes.io/projected/115cf6aa-8302-47ad-80c5-9e05d71cf03b-kube-api-access-b5wjh\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.597519 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-config\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.597633 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-dns-svc\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.597745 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.621066 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" podUID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.105:5353: connect: connection refused" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.699227 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5wjh\" (UniqueName: \"kubernetes.io/projected/115cf6aa-8302-47ad-80c5-9e05d71cf03b-kube-api-access-b5wjh\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.699663 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-config\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.699682 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-dns-svc\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.699741 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.699799 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.701233 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.701364 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-dns-svc\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.701516 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.701565 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-config\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.708691 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-rh924" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.757447 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5wjh\" (UniqueName: \"kubernetes.io/projected/115cf6aa-8302-47ad-80c5-9e05d71cf03b-kube-api-access-b5wjh\") pod \"dnsmasq-dns-8554648995-5n7s2\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.834499 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:39 crc kubenswrapper[4740]: I0130 16:16:39.847142 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.122295 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.198982 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.232251 4740 generic.go:334] "Generic (PLEG): container finished" podID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" containerID="b65963df3245544d5d93f1572a91096f86c1e3aaf09f62e8961cc8c0be40b3fd" exitCode=0 Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.232409 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" event={"ID":"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd","Type":"ContainerDied","Data":"b65963df3245544d5d93f1572a91096f86c1e3aaf09f62e8961cc8c0be40b3fd"} Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.238665 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fpfkt"] Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.241817 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" event={"ID":"4f91b4d2-b91b-427d-93a5-473f7d477294","Type":"ContainerDied","Data":"a8167b003fa720f297973fe0b9df29f346e6e5de6f08d1ba6538b7fec7f75066"} Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.241751 4740 generic.go:334] "Generic (PLEG): container finished" podID="4f91b4d2-b91b-427d-93a5-473f7d477294" containerID="a8167b003fa720f297973fe0b9df29f346e6e5de6f08d1ba6538b7fec7f75066" exitCode=0 Jan 30 16:16:40 crc kubenswrapper[4740]: W0130 16:16:40.243003 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12656704_b213_40b2_9520_58db055e7380.slice/crio-83cbef18131ae3e282e0347870f0a4813984b9f77186c51a0fe0e4b31b03423c WatchSource:0}: Error finding container 
83cbef18131ae3e282e0347870f0a4813984b9f77186c51a0fe0e4b31b03423c: Status 404 returned error can't find the container with id 83cbef18131ae3e282e0347870f0a4813984b9f77186c51a0fe0e4b31b03423c Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.383335 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.404566 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.542289 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.580947 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh924"] Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.619474 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xnxb4"] Jan 30 16:16:40 crc kubenswrapper[4740]: E0130 16:16:40.620144 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f91b4d2-b91b-427d-93a5-473f7d477294" containerName="dnsmasq-dns" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.620181 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f91b4d2-b91b-427d-93a5-473f7d477294" containerName="dnsmasq-dns" Jan 30 16:16:40 crc kubenswrapper[4740]: E0130 16:16:40.620240 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f91b4d2-b91b-427d-93a5-473f7d477294" containerName="init" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.620250 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f91b4d2-b91b-427d-93a5-473f7d477294" containerName="init" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.620536 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f91b4d2-b91b-427d-93a5-473f7d477294" containerName="dnsmasq-dns" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.622003 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.644874 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xnxb4"] Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.700070 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-config\") pod \"4f91b4d2-b91b-427d-93a5-473f7d477294\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.700215 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-dns-svc\") pod \"4f91b4d2-b91b-427d-93a5-473f7d477294\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.700360 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtjlp\" (UniqueName: \"kubernetes.io/projected/4f91b4d2-b91b-427d-93a5-473f7d477294-kube-api-access-dtjlp\") pod \"4f91b4d2-b91b-427d-93a5-473f7d477294\" (UID: \"4f91b4d2-b91b-427d-93a5-473f7d477294\") " Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.700527 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-config\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.700565 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.700616 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-479mc\" (UniqueName: \"kubernetes.io/projected/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-kube-api-access-479mc\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.700661 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.700686 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.708698 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f91b4d2-b91b-427d-93a5-473f7d477294-kube-api-access-dtjlp" (OuterVolumeSpecName: 
"kube-api-access-dtjlp") pod "4f91b4d2-b91b-427d-93a5-473f7d477294" (UID: "4f91b4d2-b91b-427d-93a5-473f7d477294"). InnerVolumeSpecName "kube-api-access-dtjlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.787939 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f91b4d2-b91b-427d-93a5-473f7d477294" (UID: "4f91b4d2-b91b-427d-93a5-473f7d477294"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.791202 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-config" (OuterVolumeSpecName: "config") pod "4f91b4d2-b91b-427d-93a5-473f7d477294" (UID: "4f91b4d2-b91b-427d-93a5-473f7d477294"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.801607 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-config\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.801680 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.801735 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-479mc\" (UniqueName: \"kubernetes.io/projected/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-kube-api-access-479mc\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.801780 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.801807 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.801864 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.801876 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f91b4d2-b91b-427d-93a5-473f7d477294-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.801887 4740 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtjlp\" (UniqueName: \"kubernetes.io/projected/4f91b4d2-b91b-427d-93a5-473f7d477294-kube-api-access-dtjlp\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.803300 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.803571 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.804127 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-config\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.804684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.838289 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-479mc\" (UniqueName: \"kubernetes.io/projected/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-kube-api-access-479mc\") pod \"dnsmasq-dns-b8fbc5445-xnxb4\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:40 crc kubenswrapper[4740]: I0130 16:16:40.974455 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.051213 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-5n7s2"] Jan 30 16:16:41 crc kubenswrapper[4740]: W0130 16:16:41.055739 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115cf6aa_8302_47ad_80c5_9e05d71cf03b.slice/crio-6b1e1b9735ce2a149257897734e7757f31ffe1d67145a542971174a3c81c0298 WatchSource:0}: Error finding container 6b1e1b9735ce2a149257897734e7757f31ffe1d67145a542971174a3c81c0298: Status 404 returned error can't find the container with id 6b1e1b9735ce2a149257897734e7757f31ffe1d67145a542971174a3c81c0298 Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.085262 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.097884 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh924"] Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.182654 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:16:41 crc kubenswrapper[4740]: E0130 16:16:41.183184 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" containerName="dnsmasq-dns" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.183203 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" containerName="dnsmasq-dns" Jan 30 16:16:41 crc kubenswrapper[4740]: E0130 16:16:41.183245 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" containerName="init" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.183256 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" containerName="init" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.183496 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" containerName="dnsmasq-dns" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.184741 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.188851 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.189056 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8wg5l" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.189176 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.189312 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.208559 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-dns-svc\") pod \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.208681 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-config\") pod \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.208807 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fmfh\" (UniqueName: \"kubernetes.io/projected/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-kube-api-access-4fmfh\") pod \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\" (UID: \"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd\") " Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.218060 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.218274 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-kube-api-access-4fmfh" (OuterVolumeSpecName: "kube-api-access-4fmfh") pod "4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" (UID: "4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd"). InnerVolumeSpecName "kube-api-access-4fmfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.276665 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-5n7s2" event={"ID":"115cf6aa-8302-47ad-80c5-9e05d71cf03b","Type":"ContainerStarted","Data":"6b1e1b9735ce2a149257897734e7757f31ffe1d67145a542971174a3c81c0298"} Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.276919 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" (UID: "4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.280054 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-rh924" event={"ID":"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a","Type":"ContainerStarted","Data":"f5d9598f90c26030c38a15cb5cbcd31022ab20aab55b3ce94b78379d7bb9592a"} Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.313376 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" event={"ID":"4f91b4d2-b91b-427d-93a5-473f7d477294","Type":"ContainerDied","Data":"09bcdd6e16cfd697b687e49919dd77dc74e2c35a7ad8d1fdd4ef0b84e32f5fe0"} Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.313446 4740 scope.go:117] "RemoveContainer" containerID="a8167b003fa720f297973fe0b9df29f346e6e5de6f08d1ba6538b7fec7f75066" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.313641 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.316661 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5972ae70-676c-4eca-a931-92f76fe6efe5-scripts\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.316712 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5972ae70-676c-4eca-a931-92f76fe6efe5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.316751 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5972ae70-676c-4eca-a931-92f76fe6efe5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.316800 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5zxh\" (UniqueName: \"kubernetes.io/projected/5972ae70-676c-4eca-a931-92f76fe6efe5-kube-api-access-d5zxh\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.316829 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5972ae70-676c-4eca-a931-92f76fe6efe5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.316848 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5972ae70-676c-4eca-a931-92f76fe6efe5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.316909 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5972ae70-676c-4eca-a931-92f76fe6efe5-config\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.316965 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fmfh\" (UniqueName: \"kubernetes.io/projected/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-kube-api-access-4fmfh\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.316976 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.321331 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fpfkt" 
event={"ID":"12656704-b213-40b2-9520-58db055e7380","Type":"ContainerStarted","Data":"83cbef18131ae3e282e0347870f0a4813984b9f77186c51a0fe0e4b31b03423c"} Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.340307 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-config" (OuterVolumeSpecName: "config") pod "4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" (UID: "4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.341426 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.443564 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zs5g8" event={"ID":"4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd","Type":"ContainerDied","Data":"7d401b04099ad7bce890804e472c3c68712cfe73d2335d30c18cc6aa32f46428"} Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.454083 4740 scope.go:117] "RemoveContainer" containerID="74e0946b05125a3d953964096f314b8e835248097cba0b8a555aacb26a0b4b00" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.456632 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5972ae70-676c-4eca-a931-92f76fe6efe5-config\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.457792 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5972ae70-676c-4eca-a931-92f76fe6efe5-config\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.463811 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5972ae70-676c-4eca-a931-92f76fe6efe5-scripts\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.463985 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5972ae70-676c-4eca-a931-92f76fe6efe5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.465224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5972ae70-676c-4eca-a931-92f76fe6efe5-scripts\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.473044 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5972ae70-676c-4eca-a931-92f76fe6efe5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.473420 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5972ae70-676c-4eca-a931-92f76fe6efe5-ovn-rundir\") 
pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.474913 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5zxh\" (UniqueName: \"kubernetes.io/projected/5972ae70-676c-4eca-a931-92f76fe6efe5-kube-api-access-d5zxh\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.475070 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5972ae70-676c-4eca-a931-92f76fe6efe5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.475124 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5972ae70-676c-4eca-a931-92f76fe6efe5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.475657 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.483344 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hpm8b"] Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.484657 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5972ae70-676c-4eca-a931-92f76fe6efe5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.489477 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5972ae70-676c-4eca-a931-92f76fe6efe5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.500215 4740 scope.go:117] "RemoveContainer" containerID="b65963df3245544d5d93f1572a91096f86c1e3aaf09f62e8961cc8c0be40b3fd" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.501953 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5972ae70-676c-4eca-a931-92f76fe6efe5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.505254 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5zxh\" (UniqueName: \"kubernetes.io/projected/5972ae70-676c-4eca-a931-92f76fe6efe5-kube-api-access-d5zxh\") pod \"ovn-northd-0\" (UID: \"5972ae70-676c-4eca-a931-92f76fe6efe5\") " pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.515318 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hpm8b"] Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.524422 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-666b6646f7-zs5g8"] Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.535503 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zs5g8"] Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.547663 4740 scope.go:117] "RemoveContainer" containerID="6a5111494e387970f12cbdc690abf2494542224f3528c3669280bf9245455564" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.714833 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.722162 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.725167 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-mvfzj" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.727967 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.728764 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.729411 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.740786 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.784319 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.785644 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xnxb4"] Jan 30 16:16:41 crc kubenswrapper[4740]: W0130 16:16:41.803801 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fa91b0b_2b2c_4e4e_8ed0_a51652314374.slice/crio-46db931dc6c44aa8f35a6434e68f8aa8b09e88a52f900e61eec90164afc2eaba WatchSource:0}: Error finding container 46db931dc6c44aa8f35a6434e68f8aa8b09e88a52f900e61eec90164afc2eaba: Status 404 returned error can't find the container with id 46db931dc6c44aa8f35a6434e68f8aa8b09e88a52f900e61eec90164afc2eaba Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.885875 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4bfb6f11-3fa9-47d0-b5b4-151154e2c066\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bfb6f11-3fa9-47d0-b5b4-151154e2c066\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.886192 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54jxz\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-kube-api-access-54jxz\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.886291 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/75ff5548-2e68-494b-b131-2b71eb8c9376-cache\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " 
pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.886679 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.886751 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/75ff5548-2e68-494b-b131-2b71eb8c9376-lock\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.886815 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ff5548-2e68-494b-b131-2b71eb8c9376-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.988710 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54jxz\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-kube-api-access-54jxz\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.988795 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/75ff5548-2e68-494b-b131-2b71eb8c9376-cache\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.988856 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.988881 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/75ff5548-2e68-494b-b131-2b71eb8c9376-lock\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.988908 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ff5548-2e68-494b-b131-2b71eb8c9376-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.988987 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4bfb6f11-3fa9-47d0-b5b4-151154e2c066\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bfb6f11-3fa9-47d0-b5b4-151154e2c066\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.990059 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/75ff5548-2e68-494b-b131-2b71eb8c9376-lock\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: I0130 16:16:41.990326 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/75ff5548-2e68-494b-b131-2b71eb8c9376-cache\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:41 crc kubenswrapper[4740]: E0130 16:16:41.990475 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:16:41 crc kubenswrapper[4740]: E0130 16:16:41.990582 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:16:41 crc kubenswrapper[4740]: E0130 16:16:41.990710 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift podName:75ff5548-2e68-494b-b131-2b71eb8c9376 nodeName:}" failed. No retries permitted until 2026-01-30 16:16:42.490679484 +0000 UTC m=+1251.127742083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift") pod "swift-storage-0" (UID: "75ff5548-2e68-494b-b131-2b71eb8c9376") : configmap "swift-ring-files" not found Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.007474 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ff5548-2e68-494b-b131-2b71eb8c9376-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.055402 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54jxz\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-kube-api-access-54jxz\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.097047 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.097114 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4bfb6f11-3fa9-47d0-b5b4-151154e2c066\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bfb6f11-3fa9-47d0-b5b4-151154e2c066\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8c2429320cf6ab8d9f8e46a782312f12dd862bbb88230f0d2ceeb2514f71eea2/globalmount\"" pod="openstack/swift-storage-0"
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.189430 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4bfb6f11-3fa9-47d0-b5b4-151154e2c066\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bfb6f11-3fa9-47d0-b5b4-151154e2c066\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0"
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.364161 4740 generic.go:334] "Generic (PLEG): container finished" podID="115cf6aa-8302-47ad-80c5-9e05d71cf03b" containerID="0d1cdd41be8ff3a4efdf6fe51af44679cf9bbe1dda3d183bd7b23c41804b2a76" exitCode=0
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.364278 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-5n7s2" event={"ID":"115cf6aa-8302-47ad-80c5-9e05d71cf03b","Type":"ContainerDied","Data":"0d1cdd41be8ff3a4efdf6fe51af44679cf9bbe1dda3d183bd7b23c41804b2a76"}
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.373077 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-rh924" event={"ID":"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a","Type":"ContainerDied","Data":"9a1fe481978d9870fbaa156d083022d3f7387d776a6964a96eeb6adbee3e21c2"}
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.372899 4740 generic.go:334] "Generic (PLEG): container finished" podID="2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a" containerID="9a1fe481978d9870fbaa156d083022d3f7387d776a6964a96eeb6adbee3e21c2" exitCode=0
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.410882 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" event={"ID":"2fa91b0b-2b2c-4e4e-8ed0-a51652314374","Type":"ContainerStarted","Data":"46db931dc6c44aa8f35a6434e68f8aa8b09e88a52f900e61eec90164afc2eaba"}
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.414283 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fpfkt" event={"ID":"12656704-b213-40b2-9520-58db055e7380","Type":"ContainerStarted","Data":"02473ab99e7c29249181276a8ecee3554c2b6f7c647562305cf407fd2ca6830a"}
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.459014 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-fpfkt" podStartSLOduration=4.4589905 podStartE2EDuration="4.4589905s" podCreationTimestamp="2026-01-30 16:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:16:42.43805416 +0000 UTC m=+1251.075116759" watchObservedRunningTime="2026-01-30 16:16:42.4589905 +0000 UTC m=+1251.096053099"
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.537209 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0"
Jan 30 16:16:42 crc kubenswrapper[4740]: E0130 16:16:42.539900 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 16:16:42 crc kubenswrapper[4740]: E0130 16:16:42.539948 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 16:16:42 crc kubenswrapper[4740]: E0130 16:16:42.540028 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift podName:75ff5548-2e68-494b-b131-2b71eb8c9376 nodeName:}" failed. No retries permitted until 2026-01-30 16:16:43.540002481 +0000 UTC m=+1252.177065080 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift") pod "swift-storage-0" (UID: "75ff5548-2e68-494b-b131-2b71eb8c9376") : configmap "swift-ring-files" not found
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.540915 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 16:16:42 crc kubenswrapper[4740]: W0130 16:16:42.561511 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5972ae70_676c_4eca_a931_92f76fe6efe5.slice/crio-62cf0b655265b4fe4c25780d15e75945693c8c853e98a38dbbfafb190778fa52 WatchSource:0}: Error finding container 62cf0b655265b4fe4c25780d15e75945693c8c853e98a38dbbfafb190778fa52: Status 404 returned error can't find the container with id 62cf0b655265b4fe4c25780d15e75945693c8c853e98a38dbbfafb190778fa52
Jan 30 16:16:42 crc kubenswrapper[4740]: E0130 16:16:42.742907 4740 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Jan 30 16:16:42 crc kubenswrapper[4740]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/115cf6aa-8302-47ad-80c5-9e05d71cf03b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Jan 30 16:16:42 crc kubenswrapper[4740]: > podSandboxID="6b1e1b9735ce2a149257897734e7757f31ffe1d67145a542971174a3c81c0298"
Jan 30 16:16:42 crc kubenswrapper[4740]: E0130 16:16:42.743809 4740 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 30 16:16:42 crc kubenswrapper[4740]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h99h64ch5dbh6dh555h587h64bh5cfh647h5fdh57ch679h9h597h5f5hbch59bh54fh575h566h667h586h5f5h65ch5bch57h68h65ch58bh694h5cfq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b5wjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8554648995-5n7s2_openstack(115cf6aa-8302-47ad-80c5-9e05d71cf03b): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/115cf6aa-8302-47ad-80c5-9e05d71cf03b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Jan 30 16:16:42 crc kubenswrapper[4740]: > logger="UnhandledError"
Jan 30 16:16:42 crc kubenswrapper[4740]: E0130 16:16:42.745099 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/115cf6aa-8302-47ad-80c5-9e05d71cf03b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-8554648995-5n7s2" podUID="115cf6aa-8302-47ad-80c5-9e05d71cf03b"
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.808005 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
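
The CreateContainerError above stems from a subPath mount: the kubelet materializes each subPath as a bind mount under /var/lib/kubelet/pods/<uid>/volume-subpaths/..., and the runtime could not find that source path when creating the container. A minimal Go sketch of the relevant part of the spec, rebuilt from the VolumeMounts dumped above (only the two subPath mounts are reproduced; the rest of the container spec is omitted):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Names, paths, and subPaths are copied from the container spec in the log.
    c := corev1.Container{
        Name:  "dnsmasq-dns",
        Image: "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified",
        VolumeMounts: []corev1.VolumeMount{
            // SubPath mounts a single key of the volume at the target path.
            {Name: "config", ReadOnly: true, MountPath: "/etc/dnsmasq.d/config.cfg", SubPath: "dns"},
            {Name: "dns-svc", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/dns-svc", SubPath: "dns-svc"},
        },
    }
    fmt.Printf("%+v\n", c.VolumeMounts)
}
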
Jan 30 16:16:42 crc kubenswrapper[4740]: E0130 16:16:42.880513 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115cf6aa_8302_47ad_80c5_9e05d71cf03b.slice/crio-2ee563affd8651e9a14910f003385aa7ed7641df607852de3224fbc41b08c8c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115cf6aa_8302_47ad_80c5_9e05d71cf03b.slice/crio-conmon-2ee563affd8651e9a14910f003385aa7ed7641df607852de3224fbc41b08c8c4.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.952160 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-ovsdbserver-nb\") pod \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") "
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.952312 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-dns-svc\") pod \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") "
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.952558 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbcgf\" (UniqueName: \"kubernetes.io/projected/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-kube-api-access-dbcgf\") pod \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") "
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.952673 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-config\") pod \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\" (UID: \"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a\") "
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.960081 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-kube-api-access-dbcgf" (OuterVolumeSpecName: "kube-api-access-dbcgf") pod "2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a" (UID: "2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a"). InnerVolumeSpecName "kube-api-access-dbcgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.979079 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a" (UID: "2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.983169 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a" (UID: "2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:16:42 crc kubenswrapper[4740]: I0130 16:16:42.986252 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-config" (OuterVolumeSpecName: "config") pod "2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a" (UID: "2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.055652 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.055705 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.055726 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.055741 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbcgf\" (UniqueName: \"kubernetes.io/projected/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a-kube-api-access-dbcgf\") on node \"crc\" DevicePath \"\""
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.351702 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd" path="/var/lib/kubelet/pods/4aa5a3fa-96d2-4ba2-a265-8a1802ba9cdd/volumes"
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.352902 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f91b4d2-b91b-427d-93a5-473f7d477294" path="/var/lib/kubelet/pods/4f91b4d2-b91b-427d-93a5-473f7d477294/volumes"
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.431311 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5972ae70-676c-4eca-a931-92f76fe6efe5","Type":"ContainerStarted","Data":"62cf0b655265b4fe4c25780d15e75945693c8c853e98a38dbbfafb190778fa52"}
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.436942 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-rh924" event={"ID":"2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a","Type":"ContainerDied","Data":"f5d9598f90c26030c38a15cb5cbcd31022ab20aab55b3ce94b78379d7bb9592a"}
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.437001 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-rh924"
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.437041 4740 scope.go:117] "RemoveContainer" containerID="9a1fe481978d9870fbaa156d083022d3f7387d776a6964a96eeb6adbee3e21c2"
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.440385 4740 generic.go:334] "Generic (PLEG): container finished" podID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" containerID="55e2d4922aacd1b5580a88b81c4e25511c66343df8589fcc559174aab1a8e482" exitCode=0
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.440892 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" event={"ID":"2fa91b0b-2b2c-4e4e-8ed0-a51652314374","Type":"ContainerDied","Data":"55e2d4922aacd1b5580a88b81c4e25511c66343df8589fcc559174aab1a8e482"}
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.526300 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh924"]
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.538974 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-rh924"]
Jan 30 16:16:43 crc kubenswrapper[4740]: I0130 16:16:43.570896 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0"
Jan 30 16:16:43 crc kubenswrapper[4740]: E0130 16:16:43.572448 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 16:16:43 crc kubenswrapper[4740]: E0130 16:16:43.572496 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 16:16:43 crc kubenswrapper[4740]: E0130 16:16:43.572585 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift podName:75ff5548-2e68-494b-b131-2b71eb8c9376 nodeName:}" failed. No retries permitted until 2026-01-30 16:16:45.572556096 +0000 UTC m=+1254.209618695 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift") pod "swift-storage-0" (UID: "75ff5548-2e68-494b-b131-2b71eb8c9376") : configmap "swift-ring-files" not found
Jan 30 16:16:44 crc kubenswrapper[4740]: I0130 16:16:44.019920 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 30 16:16:44 crc kubenswrapper[4740]: I0130 16:16:44.057654 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Jan 30 16:16:44 crc kubenswrapper[4740]: I0130 16:16:44.188014 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0"
Jan 30 16:16:44 crc kubenswrapper[4740]: I0130 16:16:44.451049 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" event={"ID":"2fa91b0b-2b2c-4e4e-8ed0-a51652314374","Type":"ContainerStarted","Data":"5501f2154f56c5db9a022bd9b55e9851c0a9d308a5a598b35bb802d0310faac0"}
Jan 30 16:16:44 crc kubenswrapper[4740]: I0130 16:16:44.451588 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4"
Jan 30 16:16:44 crc kubenswrapper[4740]: I0130 16:16:44.456630 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-5n7s2" event={"ID":"115cf6aa-8302-47ad-80c5-9e05d71cf03b","Type":"ContainerStarted","Data":"67affe99b29168960d2036f2d8a898ccd5ddfd4148af8e4e1375c68447a935e9"}
Jan 30 16:16:44 crc kubenswrapper[4740]: I0130 16:16:44.456977 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-5n7s2"
Jan 30 16:16:44 crc kubenswrapper[4740]: I0130 16:16:44.471975 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" podStartSLOduration=4.471957205 podStartE2EDuration="4.471957205s" podCreationTimestamp="2026-01-30 16:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:16:44.469640068 +0000 UTC m=+1253.106702687" watchObservedRunningTime="2026-01-30 16:16:44.471957205 +0000 UTC m=+1253.109019804"
Jan 30 16:16:44 crc kubenswrapper[4740]: I0130 16:16:44.497749 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-5n7s2" podStartSLOduration=5.497728315 podStartE2EDuration="5.497728315s" podCreationTimestamp="2026-01-30 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:16:44.491164892 +0000 UTC m=+1253.128227491" watchObservedRunningTime="2026-01-30 16:16:44.497728315 +0000 UTC m=+1253.134790914"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.103109 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-hpm8b" podUID="4f91b4d2-b91b-427d-93a5-473f7d477294" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.106:5353: i/o timeout"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.351586 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a" path="/var/lib/kubelet/pods/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a/volumes"
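
Both probe failures above reduce to simple network checks: the dnsmasq readiness probe is a TCP dial to port 5353 (failing with an i/o timeout), and the loki-ingester probe is an HTTP GET (returning 503). A small Go sketch of the TCP variant; the address is copied from the failing probe output and is not expected to be reachable from wherever this runs:

package main

import (
    "fmt"
    "net"
    "time"
)

// tcpProbe is what a TCPSocket probe boils down to: can we complete a
// TCP handshake within the timeout? Success closes the connection.
func tcpProbe(addr string, timeout time.Duration) error {
    conn, err := net.DialTimeout("tcp", addr, timeout)
    if err != nil {
        return err
    }
    return conn.Close()
}

func main() {
    // TimeoutSeconds:5 matches the probe in the container spec above.
    if err := tcpProbe("10.217.0.106:5353", 5*time.Second); err != nil {
        fmt.Println("Probe failed:", err) // analogous to the prober.go:107 line
    } else {
        fmt.Println("ready")
    }
}
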
path="/var/lib/kubelet/pods/2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a/volumes" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.596226 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9q76k"] Jan 30 16:16:45 crc kubenswrapper[4740]: E0130 16:16:45.596742 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a" containerName="init" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.596760 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a" containerName="init" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.596974 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da7e50c-d9cc-44c2-bd33-f6f1bd3a6c4a" containerName="init" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.597947 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9q76k" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.602710 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.602799 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.602914 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.612899 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9q76k"] Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.630227 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-swiftconf\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.631707 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-dispersionconf\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.631796 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-combined-ca-bundle\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.631901 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2kt6\" (UniqueName: \"kubernetes.io/projected/445dee53-61e3-43c6-b8a9-278954f963a2-kube-api-access-j2kt6\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.631988 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/445dee53-61e3-43c6-b8a9-278954f963a2-etc-swift\") pod 
\"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.632124 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-ring-data-devices\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.632294 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.632367 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-scripts\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k" Jan 30 16:16:45 crc kubenswrapper[4740]: E0130 16:16:45.632714 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 16:16:45 crc kubenswrapper[4740]: E0130 16:16:45.632736 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 16:16:45 crc kubenswrapper[4740]: E0130 16:16:45.632791 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift podName:75ff5548-2e68-494b-b131-2b71eb8c9376 nodeName:}" failed. No retries permitted until 2026-01-30 16:16:49.632769663 +0000 UTC m=+1258.269832262 (durationBeforeRetry 4s). 
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.734391 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-swiftconf\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.734520 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-dispersionconf\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.734551 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-combined-ca-bundle\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.734600 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2kt6\" (UniqueName: \"kubernetes.io/projected/445dee53-61e3-43c6-b8a9-278954f963a2-kube-api-access-j2kt6\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.734650 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/445dee53-61e3-43c6-b8a9-278954f963a2-etc-swift\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.734726 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-ring-data-devices\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.734808 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-scripts\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.735491 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/445dee53-61e3-43c6-b8a9-278954f963a2-etc-swift\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.736651 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-scripts\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.739638 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-ring-data-devices\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.743122 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-dispersionconf\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.743806 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-combined-ca-bundle\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.743855 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-swiftconf\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.756288 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2kt6\" (UniqueName: \"kubernetes.io/projected/445dee53-61e3-43c6-b8a9-278954f963a2-kube-api-access-j2kt6\") pod \"swift-ring-rebalance-9q76k\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:45 crc kubenswrapper[4740]: I0130 16:16:45.929720 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9q76k"
Jan 30 16:16:46 crc kubenswrapper[4740]: I0130 16:16:46.512718 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8ee026b-f6be-4d78-adf8-eaa7c77e1e00","Type":"ContainerStarted","Data":"c335912e6c8a515bf6d7a0922c3bee7b7df4d7b7849a3545ac13ad96a0fcb55c"}
Jan 30 16:16:46 crc kubenswrapper[4740]: I0130 16:16:46.514121 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 30 16:16:46 crc kubenswrapper[4740]: I0130 16:16:46.532585 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=24.587481241 podStartE2EDuration="1m6.532561392s" podCreationTimestamp="2026-01-30 16:15:40 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.666692101 +0000 UTC m=+1212.303754700" lastFinishedPulling="2026-01-30 16:16:45.611772252 +0000 UTC m=+1254.248834851" observedRunningTime="2026-01-30 16:16:46.53086707 +0000 UTC m=+1255.167929669" watchObservedRunningTime="2026-01-30 16:16:46.532561392 +0000 UTC m=+1255.169623991"
Jan 30 16:16:49 crc kubenswrapper[4740]: I0130 16:16:49.528885 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9q76k"]
Jan 30 16:16:49 crc kubenswrapper[4740]: I0130 16:16:49.554778 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5972ae70-676c-4eca-a931-92f76fe6efe5","Type":"ContainerStarted","Data":"af780f0cbfdec43b093116157ae39067af4689bd869671416de32c083528c0e9"}
Jan 30 16:16:49 crc kubenswrapper[4740]: I0130 16:16:49.557141 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cfabe06a-6c42-4191-b819-db7e22a9ea6b","Type":"ContainerStarted","Data":"28e7fa72b294cb9f6aa525bccba688eb4e6ac45301bbb9aff083799260d0e886"}
Jan 30 16:16:49 crc kubenswrapper[4740]: I0130 16:16:49.559104 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"47e7ebfc-24f9-4946-aace-c402546d5a60","Type":"ContainerStarted","Data":"2101984cda314638847a01d8a4313d906dcc34b5f9639821e835d70cb1dea445"}
Jan 30 16:16:49 crc kubenswrapper[4740]: I0130 16:16:49.560098 4740 generic.go:334] "Generic (PLEG): container finished" podID="483203e9-89d7-4b67-b0b9-d0bda08469da" containerID="84dcc19e76bb8d8bbae7e8a65df70b8ad7b5e15b80bceadfa923918eac0aca0a" exitCode=0
Jan 30 16:16:49 crc kubenswrapper[4740]: I0130 16:16:49.560129 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"483203e9-89d7-4b67-b0b9-d0bda08469da","Type":"ContainerDied","Data":"84dcc19e76bb8d8bbae7e8a65df70b8ad7b5e15b80bceadfa923918eac0aca0a"}
Jan 30 16:16:49 crc kubenswrapper[4740]: I0130 16:16:49.731626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0"
Jan 30 16:16:49 crc kubenswrapper[4740]: E0130 16:16:49.731886 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 16:16:49 crc kubenswrapper[4740]: E0130 16:16:49.731926 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
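
In the kube-state-metrics-0 startup-latency line above, podStartSLOduration (24.587481241s) is podStartE2EDuration (1m6.532561392s) minus the image-pull window (firstStartedPulling 16:16:03.666692101 to lastFinishedPulling 16:16:45.611772252, about 41.945s). A short Go check of that arithmetic, with the timestamps copied from the entry:

package main

import (
    "fmt"
    "time"
)

func main() {
    parse := func(s string) time.Time {
        // Layout matches Go's default time.Time string form used in the log.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }
    firstPull := parse("2026-01-30 16:16:03.666692101 +0000 UTC")
    lastPull := parse("2026-01-30 16:16:45.611772252 +0000 UTC")
    e2e := 66532561392 * time.Nanosecond // podStartE2EDuration = 1m6.532561392s
    // E2E minus the pull window reproduces podStartSLOduration.
    fmt.Println(e2e - lastPull.Sub(firstPull)) // 24.587481241s
}
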
Jan 30 16:16:49 crc kubenswrapper[4740]: E0130 16:16:49.732005 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift podName:75ff5548-2e68-494b-b131-2b71eb8c9376 nodeName:}" failed. No retries permitted until 2026-01-30 16:16:57.731980342 +0000 UTC m=+1266.369042941 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift") pod "swift-storage-0" (UID: "75ff5548-2e68-494b-b131-2b71eb8c9376") : configmap "swift-ring-files" not found
Jan 30 16:16:49 crc kubenswrapper[4740]: I0130 16:16:49.837510 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-5n7s2"
Jan 30 16:16:50 crc kubenswrapper[4740]: I0130 16:16:50.575308 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"483203e9-89d7-4b67-b0b9-d0bda08469da","Type":"ContainerStarted","Data":"e47afa686a089f2dd96fc41ca0735b034a4e255ce7351b4fb99fc2df5625c1f0"}
Jan 30 16:16:50 crc kubenswrapper[4740]: I0130 16:16:50.580788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5972ae70-676c-4eca-a931-92f76fe6efe5","Type":"ContainerStarted","Data":"f6c84dc7e78000cabac2bd8beebf2aed31bdb5dabbf53be0c1bf4b3cd707f1ea"}
Jan 30 16:16:50 crc kubenswrapper[4740]: I0130 16:16:50.581263 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 30 16:16:50 crc kubenswrapper[4740]: I0130 16:16:50.583647 4740 generic.go:334] "Generic (PLEG): container finished" podID="09f1ea51-a4df-41eb-a996-f19303114474" containerID="203c2709c88a79a29910cfa6ed3c3a2cb4779a618507daa9074d8e993b819694" exitCode=0
Jan 30 16:16:50 crc kubenswrapper[4740]: I0130 16:16:50.583722 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"09f1ea51-a4df-41eb-a996-f19303114474","Type":"ContainerDied","Data":"203c2709c88a79a29910cfa6ed3c3a2cb4779a618507daa9074d8e993b819694"}
Jan 30 16:16:50 crc kubenswrapper[4740]: I0130 16:16:50.585437 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9q76k" event={"ID":"445dee53-61e3-43c6-b8a9-278954f963a2","Type":"ContainerStarted","Data":"7611dc117a93875b7828ac2d91dd32045d0311e90769a8ee4a2c87982e4494d4"}
Jan 30 16:16:50 crc kubenswrapper[4740]: I0130 16:16:50.604631 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=54.788956057 podStartE2EDuration="1m13.604606176s" podCreationTimestamp="2026-01-30 16:15:37 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.958165718 +0000 UTC m=+1212.595228317" lastFinishedPulling="2026-01-30 16:16:22.773815837 +0000 UTC m=+1231.410878436" observedRunningTime="2026-01-30 16:16:50.597537721 +0000 UTC m=+1259.234600340" watchObservedRunningTime="2026-01-30 16:16:50.604606176 +0000 UTC m=+1259.241668775"
Jan 30 16:16:50 crc kubenswrapper[4740]: I0130 16:16:50.622718 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.1286122 podStartE2EDuration="9.622690625s" podCreationTimestamp="2026-01-30 16:16:41 +0000 UTC" firstStartedPulling="2026-01-30 16:16:42.564175811 +0000 UTC m=+1251.201238410" lastFinishedPulling="2026-01-30 16:16:49.058254236 +0000 UTC m=+1257.695316835" observedRunningTime="2026-01-30 16:16:50.622407878 +0000 UTC m=+1259.259470497" watchObservedRunningTime="2026-01-30 16:16:50.622690625 +0000 UTC m=+1259.259753224"
Jan 30 16:16:50 crc kubenswrapper[4740]: I0130 16:16:50.757708 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 30 16:16:50 crc kubenswrapper[4740]: I0130 16:16:50.976658 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4"
Jan 30 16:16:51 crc kubenswrapper[4740]: I0130 16:16:51.056116 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-5n7s2"]
Jan 30 16:16:51 crc kubenswrapper[4740]: I0130 16:16:51.057835 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-5n7s2" podUID="115cf6aa-8302-47ad-80c5-9e05d71cf03b" containerName="dnsmasq-dns" containerID="cri-o://67affe99b29168960d2036f2d8a898ccd5ddfd4148af8e4e1375c68447a935e9" gracePeriod=10
Jan 30 16:16:51 crc kubenswrapper[4740]: I0130 16:16:51.599581 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"09f1ea51-a4df-41eb-a996-f19303114474","Type":"ContainerStarted","Data":"767061d03e9624215f2df1b632fe5df13329e382162ec823efdcd6105776fe75"}
Jan 30 16:16:51 crc kubenswrapper[4740]: I0130 16:16:51.605088 4740 generic.go:334] "Generic (PLEG): container finished" podID="115cf6aa-8302-47ad-80c5-9e05d71cf03b" containerID="67affe99b29168960d2036f2d8a898ccd5ddfd4148af8e4e1375c68447a935e9" exitCode=0
Jan 30 16:16:51 crc kubenswrapper[4740]: I0130 16:16:51.605112 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-5n7s2" event={"ID":"115cf6aa-8302-47ad-80c5-9e05d71cf03b","Type":"ContainerDied","Data":"67affe99b29168960d2036f2d8a898ccd5ddfd4148af8e4e1375c68447a935e9"}
Jan 30 16:16:51 crc kubenswrapper[4740]: I0130 16:16:51.635269 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=45.459510417 podStartE2EDuration="1m16.635241983s" podCreationTimestamp="2026-01-30 16:15:35 +0000 UTC" firstStartedPulling="2026-01-30 16:15:52.016260717 +0000 UTC m=+1200.653323316" lastFinishedPulling="2026-01-30 16:16:23.191992283 +0000 UTC m=+1231.829054882" observedRunningTime="2026-01-30 16:16:51.629738387 +0000 UTC m=+1260.266800996" watchObservedRunningTime="2026-01-30 16:16:51.635241983 +0000 UTC m=+1260.272304582"
Jan 30 16:16:52 crc kubenswrapper[4740]: I0130 16:16:52.543506 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-66dfd9bb-ln5c7"
Jan 30 16:16:52 crc kubenswrapper[4740]: I0130 16:16:52.812804 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-795fd8f8cc-z6wx2"
Jan 30 16:16:52 crc kubenswrapper[4740]: I0130 16:16:52.994010 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4"
Jan 30 16:16:53 crc kubenswrapper[4740]: I0130 16:16:53.655797 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"47e7ebfc-24f9-4946-aace-c402546d5a60","Type":"ContainerStarted","Data":"75ce241c1ee486e82e6013969742fbda497b295cbad4c9c3d447d83b6728c5ae"}
Jan 30 16:16:53 crc kubenswrapper[4740]: I0130 16:16:53.656714 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
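
"Killing container with a grace period ... gracePeriod=10" above follows the usual two-step stop: a polite termination signal, then a forced kill once the grace period lapses. A loose Go sketch of the pattern against a throwaway process; this is not a real runtime call, just the shape of the logic:

package main

import (
    "fmt"
    "os/exec"
    "syscall"
    "time"
)

func main() {
    cmd := exec.Command("sleep", "60") // stand-in for the container process
    if err := cmd.Start(); err != nil {
        panic(err)
    }
    done := make(chan error, 1)
    go func() { done <- cmd.Wait() }()

    _ = cmd.Process.Signal(syscall.SIGTERM) // polite stop request
    select {
    case err := <-done:
        fmt.Println("exited within grace period:", err)
    case <-time.After(10 * time.Second): // gracePeriod=10, as in the log
        _ = cmd.Process.Kill() // SIGKILL once the deadline passes
        fmt.Println("force-killed after grace period")
    }
}
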
(probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 30 16:16:53 crc kubenswrapper[4740]: I0130 16:16:53.661739 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 30 16:16:53 crc kubenswrapper[4740]: I0130 16:16:53.669924 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cfabe06a-6c42-4191-b819-db7e22a9ea6b","Type":"ContainerStarted","Data":"c3f0090233d16cc4204de34175ffb7119b6decbd39cf9823e6e681e2f35a2ea9"} Jan 30 16:16:53 crc kubenswrapper[4740]: I0130 16:16:53.709868 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=27.365326111 podStartE2EDuration="1m12.709836238s" podCreationTimestamp="2026-01-30 16:15:41 +0000 UTC" firstStartedPulling="2026-01-30 16:16:03.673073899 +0000 UTC m=+1212.310136498" lastFinishedPulling="2026-01-30 16:16:49.017584026 +0000 UTC m=+1257.654646625" observedRunningTime="2026-01-30 16:16:53.679896664 +0000 UTC m=+1262.316959283" watchObservedRunningTime="2026-01-30 16:16:53.709836238 +0000 UTC m=+1262.346898847" Jan 30 16:16:53 crc kubenswrapper[4740]: I0130 16:16:53.983198 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-5n7s2" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.029012 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.069565 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.159054 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-dns-svc\") pod \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.159187 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-sb\") pod \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.159294 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-nb\") pod \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.159315 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5wjh\" (UniqueName: \"kubernetes.io/projected/115cf6aa-8302-47ad-80c5-9e05d71cf03b-kube-api-access-b5wjh\") pod \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.159450 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-config\") pod \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\" (UID: \"115cf6aa-8302-47ad-80c5-9e05d71cf03b\") " Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.167981 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115cf6aa-8302-47ad-80c5-9e05d71cf03b-kube-api-access-b5wjh" (OuterVolumeSpecName: "kube-api-access-b5wjh") pod "115cf6aa-8302-47ad-80c5-9e05d71cf03b" (UID: "115cf6aa-8302-47ad-80c5-9e05d71cf03b"). InnerVolumeSpecName "kube-api-access-b5wjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.212743 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "115cf6aa-8302-47ad-80c5-9e05d71cf03b" (UID: "115cf6aa-8302-47ad-80c5-9e05d71cf03b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.222003 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-config" (OuterVolumeSpecName: "config") pod "115cf6aa-8302-47ad-80c5-9e05d71cf03b" (UID: "115cf6aa-8302-47ad-80c5-9e05d71cf03b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.224861 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "115cf6aa-8302-47ad-80c5-9e05d71cf03b" (UID: "115cf6aa-8302-47ad-80c5-9e05d71cf03b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.238748 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "115cf6aa-8302-47ad-80c5-9e05d71cf03b" (UID: "115cf6aa-8302-47ad-80c5-9e05d71cf03b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.262420 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.262713 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.263066 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.263135 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/115cf6aa-8302-47ad-80c5-9e05d71cf03b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.263203 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5wjh\" (UniqueName: \"kubernetes.io/projected/115cf6aa-8302-47ad-80c5-9e05d71cf03b-kube-api-access-b5wjh\") on node \"crc\" DevicePath \"\"" Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.682877 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-5n7s2" event={"ID":"115cf6aa-8302-47ad-80c5-9e05d71cf03b","Type":"ContainerDied","Data":"6b1e1b9735ce2a149257897734e7757f31ffe1d67145a542971174a3c81c0298"} Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.682908 4740 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.682960 4740 scope.go:117] "RemoveContainer" containerID="67affe99b29168960d2036f2d8a898ccd5ddfd4148af8e4e1375c68447a935e9"
Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.715710 4740 scope.go:117] "RemoveContainer" containerID="0d1cdd41be8ff3a4efdf6fe51af44679cf9bbe1dda3d183bd7b23c41804b2a76"
Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.732487 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-5n7s2"]
Jan 30 16:16:54 crc kubenswrapper[4740]: I0130 16:16:54.744950 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-5n7s2"]
Jan 30 16:16:55 crc kubenswrapper[4740]: I0130 16:16:55.347257 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115cf6aa-8302-47ad-80c5-9e05d71cf03b" path="/var/lib/kubelet/pods/115cf6aa-8302-47ad-80c5-9e05d71cf03b/volumes"
Jan 30 16:16:56 crc kubenswrapper[4740]: I0130 16:16:56.706213 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9q76k" event={"ID":"445dee53-61e3-43c6-b8a9-278954f963a2","Type":"ContainerStarted","Data":"02aee1e8b8f0cfd41641ca7b6b363842f82f4da40774c60415bd57b696f54aa1"}
Jan 30 16:16:56 crc kubenswrapper[4740]: I0130 16:16:56.732317 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-9q76k" podStartSLOduration=7.684262506 podStartE2EDuration="11.732292114s" podCreationTimestamp="2026-01-30 16:16:45 +0000 UTC" firstStartedPulling="2026-01-30 16:16:49.546373394 +0000 UTC m=+1258.183435993" lastFinishedPulling="2026-01-30 16:16:53.594403002 +0000 UTC m=+1262.231465601" observedRunningTime="2026-01-30 16:16:56.731773051 +0000 UTC m=+1265.368835670" watchObservedRunningTime="2026-01-30 16:16:56.732292114 +0000 UTC m=+1265.369354733"
Jan 30 16:16:57 crc kubenswrapper[4740]: I0130 16:16:57.271555 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 30 16:16:57 crc kubenswrapper[4740]: I0130 16:16:57.271993 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 30 16:16:57 crc kubenswrapper[4740]: I0130 16:16:57.370209 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 30 16:16:57 crc kubenswrapper[4740]: I0130 16:16:57.741486 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0"
Jan 30 16:16:57 crc kubenswrapper[4740]: E0130 16:16:57.741750 4740 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 16:16:57 crc kubenswrapper[4740]: E0130 16:16:57.741791 4740 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 16:16:57 crc kubenswrapper[4740]: E0130 16:16:57.741886 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift podName:75ff5548-2e68-494b-b131-2b71eb8c9376 nodeName:}" failed. No retries permitted until 2026-01-30 16:17:13.741854078 +0000 UTC m=+1282.378916677 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift") pod "swift-storage-0" (UID: "75ff5548-2e68-494b-b131-2b71eb8c9376") : configmap "swift-ring-files" not found
Jan 30 16:16:57 crc kubenswrapper[4740]: I0130 16:16:57.805600 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.487034 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9806-account-create-update-vkjtb"]
Jan 30 16:16:58 crc kubenswrapper[4740]: E0130 16:16:58.488301 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115cf6aa-8302-47ad-80c5-9e05d71cf03b" containerName="init"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.488325 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="115cf6aa-8302-47ad-80c5-9e05d71cf03b" containerName="init"
Jan 30 16:16:58 crc kubenswrapper[4740]: E0130 16:16:58.488378 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115cf6aa-8302-47ad-80c5-9e05d71cf03b" containerName="dnsmasq-dns"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.488389 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="115cf6aa-8302-47ad-80c5-9e05d71cf03b" containerName="dnsmasq-dns"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.488974 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="115cf6aa-8302-47ad-80c5-9e05d71cf03b" containerName="dnsmasq-dns"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.490454 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9806-account-create-update-vkjtb"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.495068 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.515610 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9806-account-create-update-vkjtb"]
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.559012 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7sbsg"]
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.561595 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7sbsg"
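
The "Caches populated for *v1.Secret ..." lines mark a client-go reflector finishing its initial LIST into a local informer cache, which the kubelet uses to resolve per-pod Secrets and ConfigMaps. A minimal Go sketch of setting up such an informer and waiting for sync; it assumes cluster access via the default kubeconfig path, which will not exist in every environment:

package main

import (
    "time"

    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumed: a reachable cluster via ~/.kube/config.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    stop := make(chan struct{})
    defer close(stop)

    factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
    secrets := factory.Core().V1().Secrets().Informer()
    factory.Start(stop)
    // Analogous to the "Caches populated" log lines: block until the
    // initial LIST has been reflected into the local store.
    cache.WaitForCacheSync(stop, secrets.HasSynced)
}
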
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.571327 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7sbsg"]
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.661113 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75656825-bedd-47be-9ae0-fde600c6a745-operator-scripts\") pod \"keystone-9806-account-create-update-vkjtb\" (UID: \"75656825-bedd-47be-9ae0-fde600c6a745\") " pod="openstack/keystone-9806-account-create-update-vkjtb"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.661304 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt7dw\" (UniqueName: \"kubernetes.io/projected/75656825-bedd-47be-9ae0-fde600c6a745-kube-api-access-kt7dw\") pod \"keystone-9806-account-create-update-vkjtb\" (UID: \"75656825-bedd-47be-9ae0-fde600c6a745\") " pod="openstack/keystone-9806-account-create-update-vkjtb"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.661342 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab35fc86-fda3-45b5-84cf-f2651169ab1d-operator-scripts\") pod \"keystone-db-create-7sbsg\" (UID: \"ab35fc86-fda3-45b5-84cf-f2651169ab1d\") " pod="openstack/keystone-db-create-7sbsg"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.661453 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjdxh\" (UniqueName: \"kubernetes.io/projected/ab35fc86-fda3-45b5-84cf-f2651169ab1d-kube-api-access-xjdxh\") pod \"keystone-db-create-7sbsg\" (UID: \"ab35fc86-fda3-45b5-84cf-f2651169ab1d\") " pod="openstack/keystone-db-create-7sbsg"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.753720 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-lz7sh"]
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.755264 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lz7sh"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.763516 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75656825-bedd-47be-9ae0-fde600c6a745-operator-scripts\") pod \"keystone-9806-account-create-update-vkjtb\" (UID: \"75656825-bedd-47be-9ae0-fde600c6a745\") " pod="openstack/keystone-9806-account-create-update-vkjtb"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.763678 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt7dw\" (UniqueName: \"kubernetes.io/projected/75656825-bedd-47be-9ae0-fde600c6a745-kube-api-access-kt7dw\") pod \"keystone-9806-account-create-update-vkjtb\" (UID: \"75656825-bedd-47be-9ae0-fde600c6a745\") " pod="openstack/keystone-9806-account-create-update-vkjtb"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.763704 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab35fc86-fda3-45b5-84cf-f2651169ab1d-operator-scripts\") pod \"keystone-db-create-7sbsg\" (UID: \"ab35fc86-fda3-45b5-84cf-f2651169ab1d\") " pod="openstack/keystone-db-create-7sbsg"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.763777 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjdxh\" (UniqueName: \"kubernetes.io/projected/ab35fc86-fda3-45b5-84cf-f2651169ab1d-kube-api-access-xjdxh\") pod \"keystone-db-create-7sbsg\" (UID: \"ab35fc86-fda3-45b5-84cf-f2651169ab1d\") " pod="openstack/keystone-db-create-7sbsg"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.764845 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75656825-bedd-47be-9ae0-fde600c6a745-operator-scripts\") pod \"keystone-9806-account-create-update-vkjtb\" (UID: \"75656825-bedd-47be-9ae0-fde600c6a745\") " pod="openstack/keystone-9806-account-create-update-vkjtb"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.765064 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab35fc86-fda3-45b5-84cf-f2651169ab1d-operator-scripts\") pod \"keystone-db-create-7sbsg\" (UID: \"ab35fc86-fda3-45b5-84cf-f2651169ab1d\") " pod="openstack/keystone-db-create-7sbsg"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.767893 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.768073 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.773122 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lz7sh"]
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.793994 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjdxh\" (UniqueName: \"kubernetes.io/projected/ab35fc86-fda3-45b5-84cf-f2651169ab1d-kube-api-access-xjdxh\") pod \"keystone-db-create-7sbsg\" (UID: \"ab35fc86-fda3-45b5-84cf-f2651169ab1d\") " pod="openstack/keystone-db-create-7sbsg"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.801819 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt7dw\" (UniqueName: \"kubernetes.io/projected/75656825-bedd-47be-9ae0-fde600c6a745-kube-api-access-kt7dw\") pod \"keystone-9806-account-create-update-vkjtb\" (UID: \"75656825-bedd-47be-9ae0-fde600c6a745\") " pod="openstack/keystone-9806-account-create-update-vkjtb"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.823995 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9806-account-create-update-vkjtb"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.871558 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207e1134-f154-40c3-857f-5d3619c0843f-operator-scripts\") pod \"placement-db-create-lz7sh\" (UID: \"207e1134-f154-40c3-857f-5d3619c0843f\") " pod="openstack/placement-db-create-lz7sh"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.871735 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8kf\" (UniqueName: \"kubernetes.io/projected/207e1134-f154-40c3-857f-5d3619c0843f-kube-api-access-dg8kf\") pod \"placement-db-create-lz7sh\" (UID: \"207e1134-f154-40c3-857f-5d3619c0843f\") " pod="openstack/placement-db-create-lz7sh"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.891139 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7sbsg"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.892570 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-12bd-account-create-update-q7ffq"]
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.894168 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-12bd-account-create-update-q7ffq"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.900930 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.912484 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-12bd-account-create-update-q7ffq"]
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.959531 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.974346 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207e1134-f154-40c3-857f-5d3619c0843f-operator-scripts\") pod \"placement-db-create-lz7sh\" (UID: \"207e1134-f154-40c3-857f-5d3619c0843f\") " pod="openstack/placement-db-create-lz7sh"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.974461 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1625e274-251a-4381-920f-4633abfc7b93-operator-scripts\") pod \"placement-12bd-account-create-update-q7ffq\" (UID: \"1625e274-251a-4381-920f-4633abfc7b93\") " pod="openstack/placement-12bd-account-create-update-q7ffq"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.974523 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg8kf\" (UniqueName: \"kubernetes.io/projected/207e1134-f154-40c3-857f-5d3619c0843f-kube-api-access-dg8kf\") pod \"placement-db-create-lz7sh\" (UID: \"207e1134-f154-40c3-857f-5d3619c0843f\") " pod="openstack/placement-db-create-lz7sh"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.974587 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx6qt\" (UniqueName: \"kubernetes.io/projected/1625e274-251a-4381-920f-4633abfc7b93-kube-api-access-zx6qt\") pod \"placement-12bd-account-create-update-q7ffq\" (UID: \"1625e274-251a-4381-920f-4633abfc7b93\") " pod="openstack/placement-12bd-account-create-update-q7ffq"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.975641 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207e1134-f154-40c3-857f-5d3619c0843f-operator-scripts\") pod \"placement-db-create-lz7sh\" (UID: \"207e1134-f154-40c3-857f-5d3619c0843f\") " pod="openstack/placement-db-create-lz7sh"
Jan 30 16:16:58 crc kubenswrapper[4740]: I0130 16:16:58.995089 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg8kf\" (UniqueName: \"kubernetes.io/projected/207e1134-f154-40c3-857f-5d3619c0843f-kube-api-access-dg8kf\") pod \"placement-db-create-lz7sh\" (UID: \"207e1134-f154-40c3-857f-5d3619c0843f\") " pod="openstack/placement-db-create-lz7sh"
Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.076512 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lz7sh"
Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.084074 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx6qt\" (UniqueName: \"kubernetes.io/projected/1625e274-251a-4381-920f-4633abfc7b93-kube-api-access-zx6qt\") pod \"placement-12bd-account-create-update-q7ffq\" (UID: \"1625e274-251a-4381-920f-4633abfc7b93\") " pod="openstack/placement-12bd-account-create-update-q7ffq"
Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.084270 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1625e274-251a-4381-920f-4633abfc7b93-operator-scripts\") pod \"placement-12bd-account-create-update-q7ffq\" (UID: \"1625e274-251a-4381-920f-4633abfc7b93\") " pod="openstack/placement-12bd-account-create-update-q7ffq"
Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.086040 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1625e274-251a-4381-920f-4633abfc7b93-operator-scripts\") pod \"placement-12bd-account-create-update-q7ffq\" (UID: \"1625e274-251a-4381-920f-4633abfc7b93\") " pod="openstack/placement-12bd-account-create-update-q7ffq"
Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.109930 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx6qt\" (UniqueName: \"kubernetes.io/projected/1625e274-251a-4381-920f-4633abfc7b93-kube-api-access-zx6qt\") pod \"placement-12bd-account-create-update-q7ffq\" (UID: \"1625e274-251a-4381-920f-4633abfc7b93\") " pod="openstack/placement-12bd-account-create-update-q7ffq"
Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.266151 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-12bd-account-create-update-q7ffq" Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.500043 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7sbsg"] Jan 30 16:16:59 crc kubenswrapper[4740]: W0130 16:16:59.523472 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab35fc86_fda3_45b5_84cf_f2651169ab1d.slice/crio-790817d4d49baea8349ccbe0fd9297f8f8b268fd3092a643b99d5cbf03e2e39f WatchSource:0}: Error finding container 790817d4d49baea8349ccbe0fd9297f8f8b268fd3092a643b99d5cbf03e2e39f: Status 404 returned error can't find the container with id 790817d4d49baea8349ccbe0fd9297f8f8b268fd3092a643b99d5cbf03e2e39f Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.536949 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9806-account-create-update-vkjtb"] Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.645298 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lz7sh"] Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.746592 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lz7sh" event={"ID":"207e1134-f154-40c3-857f-5d3619c0843f","Type":"ContainerStarted","Data":"ee5c3613f4ee4a88eda95c09bc60c346b919e86da3f809ae8f104b5db3c9fcd9"} Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.751213 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9806-account-create-update-vkjtb" event={"ID":"75656825-bedd-47be-9ae0-fde600c6a745","Type":"ContainerStarted","Data":"da714d3dc18a50eca65e81c1957f7c58fb6ebe517679bf8f97ce2d33cebad971"} Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.761019 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7sbsg" event={"ID":"ab35fc86-fda3-45b5-84cf-f2651169ab1d","Type":"ContainerStarted","Data":"790817d4d49baea8349ccbe0fd9297f8f8b268fd3092a643b99d5cbf03e2e39f"} Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.788184 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-7sbsg" podStartSLOduration=1.7881592 podStartE2EDuration="1.7881592s" podCreationTimestamp="2026-01-30 16:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:16:59.780804497 +0000 UTC m=+1268.417867106" watchObservedRunningTime="2026-01-30 16:16:59.7881592 +0000 UTC m=+1268.425221799" Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.832125 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-12bd-account-create-update-q7ffq"] Jan 30 16:16:59 crc kubenswrapper[4740]: W0130 16:16:59.838478 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1625e274_251a_4381_920f_4633abfc7b93.slice/crio-0acdf9292f86e75821d7123f7f793d2e944ff63789dfca7c1eb91c4b5c7f61a8 WatchSource:0}: Error finding container 0acdf9292f86e75821d7123f7f793d2e944ff63789dfca7c1eb91c4b5c7f61a8: Status 404 returned error can't find the container with id 0acdf9292f86e75821d7123f7f793d2e944ff63789dfca7c1eb91c4b5c7f61a8 Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.907212 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8vhhm" 
podUID="25c16e6c-3931-4064-bf64-baf0759712a5" containerName="ovn-controller" probeResult="failure" output=< Jan 30 16:16:59 crc kubenswrapper[4740]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 16:16:59 crc kubenswrapper[4740]: > Jan 30 16:16:59 crc kubenswrapper[4740]: I0130 16:16:59.920984 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 16:17:00 crc kubenswrapper[4740]: I0130 16:17:00.790236 4740 generic.go:334] "Generic (PLEG): container finished" podID="207e1134-f154-40c3-857f-5d3619c0843f" containerID="df962fd43f1590b89be890575c3b7144ecea97a32ec3fedebb17077b5bea0303" exitCode=0 Jan 30 16:17:00 crc kubenswrapper[4740]: I0130 16:17:00.792454 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lz7sh" event={"ID":"207e1134-f154-40c3-857f-5d3619c0843f","Type":"ContainerDied","Data":"df962fd43f1590b89be890575c3b7144ecea97a32ec3fedebb17077b5bea0303"} Jan 30 16:17:00 crc kubenswrapper[4740]: I0130 16:17:00.796184 4740 generic.go:334] "Generic (PLEG): container finished" podID="3aae2bad-ea00-4d1f-a30f-a8891e15ad05" containerID="9c5383bc7a9fd9a7eb8cc88dd1a216cd4547e09dff876e8f0bc0bc92048a1f2c" exitCode=0 Jan 30 16:17:00 crc kubenswrapper[4740]: I0130 16:17:00.796342 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3aae2bad-ea00-4d1f-a30f-a8891e15ad05","Type":"ContainerDied","Data":"9c5383bc7a9fd9a7eb8cc88dd1a216cd4547e09dff876e8f0bc0bc92048a1f2c"} Jan 30 16:17:00 crc kubenswrapper[4740]: I0130 16:17:00.804537 4740 generic.go:334] "Generic (PLEG): container finished" podID="1625e274-251a-4381-920f-4633abfc7b93" containerID="7b8716b66b6dd003178ef05026d38158d63a601838db5ba3855fade119c6e359" exitCode=0 Jan 30 16:17:00 crc kubenswrapper[4740]: I0130 16:17:00.805412 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-12bd-account-create-update-q7ffq" event={"ID":"1625e274-251a-4381-920f-4633abfc7b93","Type":"ContainerDied","Data":"7b8716b66b6dd003178ef05026d38158d63a601838db5ba3855fade119c6e359"} Jan 30 16:17:00 crc kubenswrapper[4740]: I0130 16:17:00.805508 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-12bd-account-create-update-q7ffq" event={"ID":"1625e274-251a-4381-920f-4633abfc7b93","Type":"ContainerStarted","Data":"0acdf9292f86e75821d7123f7f793d2e944ff63789dfca7c1eb91c4b5c7f61a8"} Jan 30 16:17:00 crc kubenswrapper[4740]: I0130 16:17:00.813716 4740 generic.go:334] "Generic (PLEG): container finished" podID="75656825-bedd-47be-9ae0-fde600c6a745" containerID="7de0206ebc729eaa887298c8fde824ffbfb8571562022395efd0341da50542a6" exitCode=0 Jan 30 16:17:00 crc kubenswrapper[4740]: I0130 16:17:00.813859 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9806-account-create-update-vkjtb" event={"ID":"75656825-bedd-47be-9ae0-fde600c6a745","Type":"ContainerDied","Data":"7de0206ebc729eaa887298c8fde824ffbfb8571562022395efd0341da50542a6"} Jan 30 16:17:00 crc kubenswrapper[4740]: I0130 16:17:00.815479 4740 generic.go:334] "Generic (PLEG): container finished" podID="ab35fc86-fda3-45b5-84cf-f2651169ab1d" containerID="6063fc1dfce776627159f3677a4c28877a1f6aeca439438b683e3c82f1cd71e8" exitCode=0 Jan 30 16:17:00 crc kubenswrapper[4740]: I0130 16:17:00.817225 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7sbsg" 
event={"ID":"ab35fc86-fda3-45b5-84cf-f2651169ab1d","Type":"ContainerDied","Data":"6063fc1dfce776627159f3677a4c28877a1f6aeca439438b683e3c82f1cd71e8"} Jan 30 16:17:01 crc kubenswrapper[4740]: I0130 16:17:01.827866 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3aae2bad-ea00-4d1f-a30f-a8891e15ad05","Type":"ContainerStarted","Data":"f5726cc466db1ba55b4c5679b88c91afdce24f273892f7a01b8a4fe90f232d59"} Jan 30 16:17:01 crc kubenswrapper[4740]: I0130 16:17:01.829770 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 16:17:01 crc kubenswrapper[4740]: I0130 16:17:01.832771 4740 generic.go:334] "Generic (PLEG): container finished" podID="860fd88f-2b83-4fc3-8411-7d10dc1281b2" containerID="18830ee869670f7ca9913ca98acb195b4ffa37625511af68394e4945e405be8b" exitCode=0 Jan 30 16:17:01 crc kubenswrapper[4740]: I0130 16:17:01.832957 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"860fd88f-2b83-4fc3-8411-7d10dc1281b2","Type":"ContainerDied","Data":"18830ee869670f7ca9913ca98acb195b4ffa37625511af68394e4945e405be8b"} Jan 30 16:17:01 crc kubenswrapper[4740]: I0130 16:17:01.875977 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 16:17:01 crc kubenswrapper[4740]: I0130 16:17:01.876494 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.416107151 podStartE2EDuration="1m27.876464635s" podCreationTimestamp="2026-01-30 16:15:34 +0000 UTC" firstStartedPulling="2026-01-30 16:15:36.695608394 +0000 UTC m=+1185.332670993" lastFinishedPulling="2026-01-30 16:16:23.155965878 +0000 UTC m=+1231.793028477" observedRunningTime="2026-01-30 16:17:01.867746929 +0000 UTC m=+1270.504809528" watchObservedRunningTime="2026-01-30 16:17:01.876464635 +0000 UTC m=+1270.513527264" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.551898 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-12bd-account-create-update-q7ffq" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.561492 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7sbsg" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.566295 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9806-account-create-update-vkjtb" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.570973 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lz7sh" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.604859 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx6qt\" (UniqueName: \"kubernetes.io/projected/1625e274-251a-4381-920f-4633abfc7b93-kube-api-access-zx6qt\") pod \"1625e274-251a-4381-920f-4633abfc7b93\" (UID: \"1625e274-251a-4381-920f-4633abfc7b93\") " Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.605011 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1625e274-251a-4381-920f-4633abfc7b93-operator-scripts\") pod \"1625e274-251a-4381-920f-4633abfc7b93\" (UID: \"1625e274-251a-4381-920f-4633abfc7b93\") " Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.605086 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjdxh\" (UniqueName: \"kubernetes.io/projected/ab35fc86-fda3-45b5-84cf-f2651169ab1d-kube-api-access-xjdxh\") pod \"ab35fc86-fda3-45b5-84cf-f2651169ab1d\" (UID: \"ab35fc86-fda3-45b5-84cf-f2651169ab1d\") " Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.605134 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg8kf\" (UniqueName: \"kubernetes.io/projected/207e1134-f154-40c3-857f-5d3619c0843f-kube-api-access-dg8kf\") pod \"207e1134-f154-40c3-857f-5d3619c0843f\" (UID: \"207e1134-f154-40c3-857f-5d3619c0843f\") " Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.605168 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75656825-bedd-47be-9ae0-fde600c6a745-operator-scripts\") pod \"75656825-bedd-47be-9ae0-fde600c6a745\" (UID: \"75656825-bedd-47be-9ae0-fde600c6a745\") " Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.605204 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207e1134-f154-40c3-857f-5d3619c0843f-operator-scripts\") pod \"207e1134-f154-40c3-857f-5d3619c0843f\" (UID: \"207e1134-f154-40c3-857f-5d3619c0843f\") " Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.605416 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab35fc86-fda3-45b5-84cf-f2651169ab1d-operator-scripts\") pod \"ab35fc86-fda3-45b5-84cf-f2651169ab1d\" (UID: \"ab35fc86-fda3-45b5-84cf-f2651169ab1d\") " Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.605467 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt7dw\" (UniqueName: \"kubernetes.io/projected/75656825-bedd-47be-9ae0-fde600c6a745-kube-api-access-kt7dw\") pod \"75656825-bedd-47be-9ae0-fde600c6a745\" (UID: \"75656825-bedd-47be-9ae0-fde600c6a745\") " Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.611169 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75656825-bedd-47be-9ae0-fde600c6a745-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75656825-bedd-47be-9ae0-fde600c6a745" (UID: "75656825-bedd-47be-9ae0-fde600c6a745"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.611287 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab35fc86-fda3-45b5-84cf-f2651169ab1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab35fc86-fda3-45b5-84cf-f2651169ab1d" (UID: "ab35fc86-fda3-45b5-84cf-f2651169ab1d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.611369 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/207e1134-f154-40c3-857f-5d3619c0843f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "207e1134-f154-40c3-857f-5d3619c0843f" (UID: "207e1134-f154-40c3-857f-5d3619c0843f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.611580 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75656825-bedd-47be-9ae0-fde600c6a745-kube-api-access-kt7dw" (OuterVolumeSpecName: "kube-api-access-kt7dw") pod "75656825-bedd-47be-9ae0-fde600c6a745" (UID: "75656825-bedd-47be-9ae0-fde600c6a745"). InnerVolumeSpecName "kube-api-access-kt7dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.611896 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1625e274-251a-4381-920f-4633abfc7b93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1625e274-251a-4381-920f-4633abfc7b93" (UID: "1625e274-251a-4381-920f-4633abfc7b93"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.613132 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/207e1134-f154-40c3-857f-5d3619c0843f-kube-api-access-dg8kf" (OuterVolumeSpecName: "kube-api-access-dg8kf") pod "207e1134-f154-40c3-857f-5d3619c0843f" (UID: "207e1134-f154-40c3-857f-5d3619c0843f"). InnerVolumeSpecName "kube-api-access-dg8kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.615607 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab35fc86-fda3-45b5-84cf-f2651169ab1d-kube-api-access-xjdxh" (OuterVolumeSpecName: "kube-api-access-xjdxh") pod "ab35fc86-fda3-45b5-84cf-f2651169ab1d" (UID: "ab35fc86-fda3-45b5-84cf-f2651169ab1d"). InnerVolumeSpecName "kube-api-access-xjdxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.626747 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1625e274-251a-4381-920f-4633abfc7b93-kube-api-access-zx6qt" (OuterVolumeSpecName: "kube-api-access-zx6qt") pod "1625e274-251a-4381-920f-4633abfc7b93" (UID: "1625e274-251a-4381-920f-4633abfc7b93"). InnerVolumeSpecName "kube-api-access-zx6qt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.707569 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab35fc86-fda3-45b5-84cf-f2651169ab1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.707633 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt7dw\" (UniqueName: \"kubernetes.io/projected/75656825-bedd-47be-9ae0-fde600c6a745-kube-api-access-kt7dw\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.707649 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx6qt\" (UniqueName: \"kubernetes.io/projected/1625e274-251a-4381-920f-4633abfc7b93-kube-api-access-zx6qt\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.707658 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1625e274-251a-4381-920f-4633abfc7b93-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.707668 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjdxh\" (UniqueName: \"kubernetes.io/projected/ab35fc86-fda3-45b5-84cf-f2651169ab1d-kube-api-access-xjdxh\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.707677 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg8kf\" (UniqueName: \"kubernetes.io/projected/207e1134-f154-40c3-857f-5d3619c0843f-kube-api-access-dg8kf\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.707687 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75656825-bedd-47be-9ae0-fde600c6a745-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.707696 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/207e1134-f154-40c3-857f-5d3619c0843f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.867240 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9806-account-create-update-vkjtb" event={"ID":"75656825-bedd-47be-9ae0-fde600c6a745","Type":"ContainerDied","Data":"da714d3dc18a50eca65e81c1957f7c58fb6ebe517679bf8f97ce2d33cebad971"} Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.867295 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da714d3dc18a50eca65e81c1957f7c58fb6ebe517679bf8f97ce2d33cebad971" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.867250 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9806-account-create-update-vkjtb" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.871530 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7sbsg" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.871554 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7sbsg" event={"ID":"ab35fc86-fda3-45b5-84cf-f2651169ab1d","Type":"ContainerDied","Data":"790817d4d49baea8349ccbe0fd9297f8f8b268fd3092a643b99d5cbf03e2e39f"} Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.871638 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="790817d4d49baea8349ccbe0fd9297f8f8b268fd3092a643b99d5cbf03e2e39f" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.873541 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lz7sh" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.873522 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lz7sh" event={"ID":"207e1134-f154-40c3-857f-5d3619c0843f","Type":"ContainerDied","Data":"ee5c3613f4ee4a88eda95c09bc60c346b919e86da3f809ae8f104b5db3c9fcd9"} Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.873669 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee5c3613f4ee4a88eda95c09bc60c346b919e86da3f809ae8f104b5db3c9fcd9" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.876532 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"860fd88f-2b83-4fc3-8411-7d10dc1281b2","Type":"ContainerStarted","Data":"ed7fcb5e92eb6a6f5bfecb849788ba619d6e0abb4dc66d86842facb2b072d7a0"} Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.877431 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.881957 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-12bd-account-create-update-q7ffq" event={"ID":"1625e274-251a-4381-920f-4633abfc7b93","Type":"ContainerDied","Data":"0acdf9292f86e75821d7123f7f793d2e944ff63789dfca7c1eb91c4b5c7f61a8"} Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.881998 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0acdf9292f86e75821d7123f7f793d2e944ff63789dfca7c1eb91c4b5c7f61a8" Jan 30 16:17:02 crc kubenswrapper[4740]: I0130 16:17:02.882072 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-12bd-account-create-update-q7ffq" Jan 30 16:17:03 crc kubenswrapper[4740]: I0130 16:17:03.612330 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.035158225 podStartE2EDuration="1m29.612302129s" podCreationTimestamp="2026-01-30 16:15:34 +0000 UTC" firstStartedPulling="2026-01-30 16:15:36.907470479 +0000 UTC m=+1185.544533078" lastFinishedPulling="2026-01-30 16:16:24.484614343 +0000 UTC m=+1233.121676982" observedRunningTime="2026-01-30 16:17:02.949868033 +0000 UTC m=+1271.586930632" watchObservedRunningTime="2026-01-30 16:17:03.612302129 +0000 UTC m=+1272.249364728" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.043805 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.118083 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-49lwq"] Jan 30 16:17:04 crc kubenswrapper[4740]: E0130 16:17:04.118601 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1625e274-251a-4381-920f-4633abfc7b93" containerName="mariadb-account-create-update" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.118619 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1625e274-251a-4381-920f-4633abfc7b93" containerName="mariadb-account-create-update" Jan 30 16:17:04 crc kubenswrapper[4740]: E0130 16:17:04.118630 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="207e1134-f154-40c3-857f-5d3619c0843f" containerName="mariadb-database-create" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.118636 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="207e1134-f154-40c3-857f-5d3619c0843f" containerName="mariadb-database-create" Jan 30 16:17:04 crc kubenswrapper[4740]: E0130 16:17:04.118657 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75656825-bedd-47be-9ae0-fde600c6a745" containerName="mariadb-account-create-update" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.118664 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="75656825-bedd-47be-9ae0-fde600c6a745" containerName="mariadb-account-create-update" Jan 30 16:17:04 crc kubenswrapper[4740]: E0130 16:17:04.118674 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab35fc86-fda3-45b5-84cf-f2651169ab1d" containerName="mariadb-database-create" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.118681 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab35fc86-fda3-45b5-84cf-f2651169ab1d" containerName="mariadb-database-create" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.118863 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab35fc86-fda3-45b5-84cf-f2651169ab1d" containerName="mariadb-database-create" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.118878 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1625e274-251a-4381-920f-4633abfc7b93" containerName="mariadb-account-create-update" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.118892 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="75656825-bedd-47be-9ae0-fde600c6a745" containerName="mariadb-account-create-update" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 
16:17:04.118905 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="207e1134-f154-40c3-857f-5d3619c0843f" containerName="mariadb-database-create" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.119656 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-49lwq" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.139607 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-49lwq"] Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.240502 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11a8409-8872-4bdb-8409-db5350a4b0c4-operator-scripts\") pod \"glance-db-create-49lwq\" (UID: \"d11a8409-8872-4bdb-8409-db5350a4b0c4\") " pod="openstack/glance-db-create-49lwq" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.240839 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94gs4\" (UniqueName: \"kubernetes.io/projected/d11a8409-8872-4bdb-8409-db5350a4b0c4-kube-api-access-94gs4\") pod \"glance-db-create-49lwq\" (UID: \"d11a8409-8872-4bdb-8409-db5350a4b0c4\") " pod="openstack/glance-db-create-49lwq" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.248822 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a2dd-account-create-update-vtbbj"] Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.250275 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a2dd-account-create-update-vtbbj" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.262944 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.274683 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a2dd-account-create-update-vtbbj"] Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.343021 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11a8409-8872-4bdb-8409-db5350a4b0c4-operator-scripts\") pod \"glance-db-create-49lwq\" (UID: \"d11a8409-8872-4bdb-8409-db5350a4b0c4\") " pod="openstack/glance-db-create-49lwq" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.343084 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94gs4\" (UniqueName: \"kubernetes.io/projected/d11a8409-8872-4bdb-8409-db5350a4b0c4-kube-api-access-94gs4\") pod \"glance-db-create-49lwq\" (UID: \"d11a8409-8872-4bdb-8409-db5350a4b0c4\") " pod="openstack/glance-db-create-49lwq" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.343132 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94ppr\" (UniqueName: \"kubernetes.io/projected/eb1a729e-aa92-4658-b088-ec2b17042358-kube-api-access-94ppr\") pod \"glance-a2dd-account-create-update-vtbbj\" (UID: \"eb1a729e-aa92-4658-b088-ec2b17042358\") " pod="openstack/glance-a2dd-account-create-update-vtbbj" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.343191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1a729e-aa92-4658-b088-ec2b17042358-operator-scripts\") pod 
\"glance-a2dd-account-create-update-vtbbj\" (UID: \"eb1a729e-aa92-4658-b088-ec2b17042358\") " pod="openstack/glance-a2dd-account-create-update-vtbbj" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.343908 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11a8409-8872-4bdb-8409-db5350a4b0c4-operator-scripts\") pod \"glance-db-create-49lwq\" (UID: \"d11a8409-8872-4bdb-8409-db5350a4b0c4\") " pod="openstack/glance-db-create-49lwq" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.392317 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94gs4\" (UniqueName: \"kubernetes.io/projected/d11a8409-8872-4bdb-8409-db5350a4b0c4-kube-api-access-94gs4\") pod \"glance-db-create-49lwq\" (UID: \"d11a8409-8872-4bdb-8409-db5350a4b0c4\") " pod="openstack/glance-db-create-49lwq" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.442060 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-49lwq" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.444577 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94ppr\" (UniqueName: \"kubernetes.io/projected/eb1a729e-aa92-4658-b088-ec2b17042358-kube-api-access-94ppr\") pod \"glance-a2dd-account-create-update-vtbbj\" (UID: \"eb1a729e-aa92-4658-b088-ec2b17042358\") " pod="openstack/glance-a2dd-account-create-update-vtbbj" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.444628 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1a729e-aa92-4658-b088-ec2b17042358-operator-scripts\") pod \"glance-a2dd-account-create-update-vtbbj\" (UID: \"eb1a729e-aa92-4658-b088-ec2b17042358\") " pod="openstack/glance-a2dd-account-create-update-vtbbj" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.445342 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1a729e-aa92-4658-b088-ec2b17042358-operator-scripts\") pod \"glance-a2dd-account-create-update-vtbbj\" (UID: \"eb1a729e-aa92-4658-b088-ec2b17042358\") " pod="openstack/glance-a2dd-account-create-update-vtbbj" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.470636 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94ppr\" (UniqueName: \"kubernetes.io/projected/eb1a729e-aa92-4658-b088-ec2b17042358-kube-api-access-94ppr\") pod \"glance-a2dd-account-create-update-vtbbj\" (UID: \"eb1a729e-aa92-4658-b088-ec2b17042358\") " pod="openstack/glance-a2dd-account-create-update-vtbbj" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.567023 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a2dd-account-create-update-vtbbj" Jan 30 16:17:04 crc kubenswrapper[4740]: I0130 16:17:04.943343 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8vhhm" podUID="25c16e6c-3931-4064-bf64-baf0759712a5" containerName="ovn-controller" probeResult="failure" output=< Jan 30 16:17:04 crc kubenswrapper[4740]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 16:17:04 crc kubenswrapper[4740]: > Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.001934 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.006972 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a2dd-account-create-update-vtbbj"] Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.025880 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7wnqc" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.130767 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-49lwq"] Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.427323 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8vhhm-config-8lt8j"] Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.428802 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.432039 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.451142 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8vhhm-config-8lt8j"] Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.586515 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run-ovn\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.586592 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-additional-scripts\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.586620 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.586721 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-scripts\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" 
Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.586767 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-log-ovn\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.586793 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbm4q\" (UniqueName: \"kubernetes.io/projected/1c325e0c-ce45-4394-950e-74f73640fc73-kube-api-access-xbm4q\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.689004 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run-ovn\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.689081 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-additional-scripts\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.689107 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.689203 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-scripts\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.689248 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-log-ovn\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.689277 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbm4q\" (UniqueName: \"kubernetes.io/projected/1c325e0c-ce45-4394-950e-74f73640fc73-kube-api-access-xbm4q\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.689427 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run-ovn\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " 
pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.689541 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-log-ovn\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.689597 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.690002 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-additional-scripts\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.691681 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-scripts\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.720594 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbm4q\" (UniqueName: \"kubernetes.io/projected/1c325e0c-ce45-4394-950e-74f73640fc73-kube-api-access-xbm4q\") pod \"ovn-controller-8vhhm-config-8lt8j\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.757260 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.845210 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hpdng"] Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.852748 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hpdng" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.856289 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.898006 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hpdng"] Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.901564 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8abe33e-02c0-4927-b025-100752d57e49-operator-scripts\") pod \"root-account-create-update-hpdng\" (UID: \"f8abe33e-02c0-4927-b025-100752d57e49\") " pod="openstack/root-account-create-update-hpdng" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.901681 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn75k\" (UniqueName: \"kubernetes.io/projected/f8abe33e-02c0-4927-b025-100752d57e49-kube-api-access-gn75k\") pod \"root-account-create-update-hpdng\" (UID: \"f8abe33e-02c0-4927-b025-100752d57e49\") " pod="openstack/root-account-create-update-hpdng" Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.934674 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a2dd-account-create-update-vtbbj" event={"ID":"eb1a729e-aa92-4658-b088-ec2b17042358","Type":"ContainerStarted","Data":"f37896113c60c056cba4138962f86fffdc199d6d367eb0cfc2aa7fa30e9d1c22"} Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.934722 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a2dd-account-create-update-vtbbj" event={"ID":"eb1a729e-aa92-4658-b088-ec2b17042358","Type":"ContainerStarted","Data":"282481dbd58404f9505f2098add7ef4b86160972f3567f5d98b4f652e971a62d"} Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.937757 4740 generic.go:334] "Generic (PLEG): container finished" podID="445dee53-61e3-43c6-b8a9-278954f963a2" containerID="02aee1e8b8f0cfd41641ca7b6b363842f82f4da40774c60415bd57b696f54aa1" exitCode=0 Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.937821 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9q76k" event={"ID":"445dee53-61e3-43c6-b8a9-278954f963a2","Type":"ContainerDied","Data":"02aee1e8b8f0cfd41641ca7b6b363842f82f4da40774c60415bd57b696f54aa1"} Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.940531 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-49lwq" event={"ID":"d11a8409-8872-4bdb-8409-db5350a4b0c4","Type":"ContainerStarted","Data":"23fdb5f0537a9f7470f3195a756b7f65733c6c5442093d71314d99ecfc2ec628"} Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.940572 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-49lwq" event={"ID":"d11a8409-8872-4bdb-8409-db5350a4b0c4","Type":"ContainerStarted","Data":"f2557688a407a1a46f19c0e76be2665734985a1549b619d24c3aef2f9094adfd"} Jan 30 16:17:05 crc kubenswrapper[4740]: I0130 16:17:05.979592 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-a2dd-account-create-update-vtbbj" podStartSLOduration=1.97956465 podStartE2EDuration="1.97956465s" podCreationTimestamp="2026-01-30 16:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
16:17:05.96184617 +0000 UTC m=+1274.598908769" watchObservedRunningTime="2026-01-30 16:17:05.97956465 +0000 UTC m=+1274.616627239" Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.008483 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8abe33e-02c0-4927-b025-100752d57e49-operator-scripts\") pod \"root-account-create-update-hpdng\" (UID: \"f8abe33e-02c0-4927-b025-100752d57e49\") " pod="openstack/root-account-create-update-hpdng" Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.008646 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn75k\" (UniqueName: \"kubernetes.io/projected/f8abe33e-02c0-4927-b025-100752d57e49-kube-api-access-gn75k\") pod \"root-account-create-update-hpdng\" (UID: \"f8abe33e-02c0-4927-b025-100752d57e49\") " pod="openstack/root-account-create-update-hpdng" Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.009371 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8abe33e-02c0-4927-b025-100752d57e49-operator-scripts\") pod \"root-account-create-update-hpdng\" (UID: \"f8abe33e-02c0-4927-b025-100752d57e49\") " pod="openstack/root-account-create-update-hpdng" Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.021156 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-49lwq" podStartSLOduration=2.021130982 podStartE2EDuration="2.021130982s" podCreationTimestamp="2026-01-30 16:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:17:05.987924718 +0000 UTC m=+1274.624987317" watchObservedRunningTime="2026-01-30 16:17:06.021130982 +0000 UTC m=+1274.658193581" Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.088191 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn75k\" (UniqueName: \"kubernetes.io/projected/f8abe33e-02c0-4927-b025-100752d57e49-kube-api-access-gn75k\") pod \"root-account-create-update-hpdng\" (UID: \"f8abe33e-02c0-4927-b025-100752d57e49\") " pod="openstack/root-account-create-update-hpdng" Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.201009 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hpdng" Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.459037 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8vhhm-config-8lt8j"] Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.977511 4740 generic.go:334] "Generic (PLEG): container finished" podID="d11a8409-8872-4bdb-8409-db5350a4b0c4" containerID="23fdb5f0537a9f7470f3195a756b7f65733c6c5442093d71314d99ecfc2ec628" exitCode=0 Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.977563 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-49lwq" event={"ID":"d11a8409-8872-4bdb-8409-db5350a4b0c4","Type":"ContainerDied","Data":"23fdb5f0537a9f7470f3195a756b7f65733c6c5442093d71314d99ecfc2ec628"} Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.981640 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8vhhm-config-8lt8j" event={"ID":"1c325e0c-ce45-4394-950e-74f73640fc73","Type":"ContainerStarted","Data":"3dab56732d4994df0036ae46dd73dc2fcfa54b435be6071cef2067ffa556f61a"} Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.998196 4740 generic.go:334] "Generic (PLEG): container finished" podID="eb1a729e-aa92-4658-b088-ec2b17042358" containerID="f37896113c60c056cba4138962f86fffdc199d6d367eb0cfc2aa7fa30e9d1c22" exitCode=0 Jan 30 16:17:06 crc kubenswrapper[4740]: I0130 16:17:06.998738 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a2dd-account-create-update-vtbbj" event={"ID":"eb1a729e-aa92-4658-b088-ec2b17042358","Type":"ContainerDied","Data":"f37896113c60c056cba4138962f86fffdc199d6d367eb0cfc2aa7fa30e9d1c22"} Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.042151 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hpdng"] Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.463372 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9q76k" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.567456 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/445dee53-61e3-43c6-b8a9-278954f963a2-etc-swift\") pod \"445dee53-61e3-43c6-b8a9-278954f963a2\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.567510 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-ring-data-devices\") pod \"445dee53-61e3-43c6-b8a9-278954f963a2\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.567607 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2kt6\" (UniqueName: \"kubernetes.io/projected/445dee53-61e3-43c6-b8a9-278954f963a2-kube-api-access-j2kt6\") pod \"445dee53-61e3-43c6-b8a9-278954f963a2\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.567674 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-scripts\") pod \"445dee53-61e3-43c6-b8a9-278954f963a2\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.567717 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-dispersionconf\") pod \"445dee53-61e3-43c6-b8a9-278954f963a2\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.567818 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-combined-ca-bundle\") pod \"445dee53-61e3-43c6-b8a9-278954f963a2\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.567873 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-swiftconf\") pod \"445dee53-61e3-43c6-b8a9-278954f963a2\" (UID: \"445dee53-61e3-43c6-b8a9-278954f963a2\") " Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.568657 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "445dee53-61e3-43c6-b8a9-278954f963a2" (UID: "445dee53-61e3-43c6-b8a9-278954f963a2"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.568727 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/445dee53-61e3-43c6-b8a9-278954f963a2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "445dee53-61e3-43c6-b8a9-278954f963a2" (UID: "445dee53-61e3-43c6-b8a9-278954f963a2"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.579632 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/445dee53-61e3-43c6-b8a9-278954f963a2-kube-api-access-j2kt6" (OuterVolumeSpecName: "kube-api-access-j2kt6") pod "445dee53-61e3-43c6-b8a9-278954f963a2" (UID: "445dee53-61e3-43c6-b8a9-278954f963a2"). InnerVolumeSpecName "kube-api-access-j2kt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.589547 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "445dee53-61e3-43c6-b8a9-278954f963a2" (UID: "445dee53-61e3-43c6-b8a9-278954f963a2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.615556 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "445dee53-61e3-43c6-b8a9-278954f963a2" (UID: "445dee53-61e3-43c6-b8a9-278954f963a2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.621438 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-scripts" (OuterVolumeSpecName: "scripts") pod "445dee53-61e3-43c6-b8a9-278954f963a2" (UID: "445dee53-61e3-43c6-b8a9-278954f963a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.623642 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "445dee53-61e3-43c6-b8a9-278954f963a2" (UID: "445dee53-61e3-43c6-b8a9-278954f963a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.670715 4740 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.670758 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.670772 4740 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/445dee53-61e3-43c6-b8a9-278954f963a2-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.670782 4740 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/445dee53-61e3-43c6-b8a9-278954f963a2-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.670793 4740 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.670811 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2kt6\" (UniqueName: \"kubernetes.io/projected/445dee53-61e3-43c6-b8a9-278954f963a2-kube-api-access-j2kt6\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:07 crc kubenswrapper[4740]: I0130 16:17:07.670827 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/445dee53-61e3-43c6-b8a9-278954f963a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.010303 4740 generic.go:334] "Generic (PLEG): container finished" podID="1c325e0c-ce45-4394-950e-74f73640fc73" containerID="3035cc0d3b0b95d327c049822f58a321542e9893a907a300910038c186052482" exitCode=0 Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.010427 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8vhhm-config-8lt8j" event={"ID":"1c325e0c-ce45-4394-950e-74f73640fc73","Type":"ContainerDied","Data":"3035cc0d3b0b95d327c049822f58a321542e9893a907a300910038c186052482"} Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.012374 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9q76k" event={"ID":"445dee53-61e3-43c6-b8a9-278954f963a2","Type":"ContainerDied","Data":"7611dc117a93875b7828ac2d91dd32045d0311e90769a8ee4a2c87982e4494d4"} Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.012446 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7611dc117a93875b7828ac2d91dd32045d0311e90769a8ee4a2c87982e4494d4" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.012474 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9q76k" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.014121 4740 generic.go:334] "Generic (PLEG): container finished" podID="f8abe33e-02c0-4927-b025-100752d57e49" containerID="ef40eb91da1e9153003a0ea45570f3a711bb6a5f834982a83efd5f0810b385c2" exitCode=0 Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.014396 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hpdng" event={"ID":"f8abe33e-02c0-4927-b025-100752d57e49","Type":"ContainerDied","Data":"ef40eb91da1e9153003a0ea45570f3a711bb6a5f834982a83efd5f0810b385c2"} Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.014422 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hpdng" event={"ID":"f8abe33e-02c0-4927-b025-100752d57e49","Type":"ContainerStarted","Data":"c832645b8ed782545e74e1405cf304bca2264fbf3a9010b589b4fce517b86074"} Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.423800 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-49lwq" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.498325 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11a8409-8872-4bdb-8409-db5350a4b0c4-operator-scripts\") pod \"d11a8409-8872-4bdb-8409-db5350a4b0c4\" (UID: \"d11a8409-8872-4bdb-8409-db5350a4b0c4\") " Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.498616 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94gs4\" (UniqueName: \"kubernetes.io/projected/d11a8409-8872-4bdb-8409-db5350a4b0c4-kube-api-access-94gs4\") pod \"d11a8409-8872-4bdb-8409-db5350a4b0c4\" (UID: \"d11a8409-8872-4bdb-8409-db5350a4b0c4\") " Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.500968 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11a8409-8872-4bdb-8409-db5350a4b0c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d11a8409-8872-4bdb-8409-db5350a4b0c4" (UID: "d11a8409-8872-4bdb-8409-db5350a4b0c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.536718 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11a8409-8872-4bdb-8409-db5350a4b0c4-kube-api-access-94gs4" (OuterVolumeSpecName: "kube-api-access-94gs4") pod "d11a8409-8872-4bdb-8409-db5350a4b0c4" (UID: "d11a8409-8872-4bdb-8409-db5350a4b0c4"). InnerVolumeSpecName "kube-api-access-94gs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.601310 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94gs4\" (UniqueName: \"kubernetes.io/projected/d11a8409-8872-4bdb-8409-db5350a4b0c4-kube-api-access-94gs4\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.601381 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11a8409-8872-4bdb-8409-db5350a4b0c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.656847 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a2dd-account-create-update-vtbbj" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.702481 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1a729e-aa92-4658-b088-ec2b17042358-operator-scripts\") pod \"eb1a729e-aa92-4658-b088-ec2b17042358\" (UID: \"eb1a729e-aa92-4658-b088-ec2b17042358\") " Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.702698 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94ppr\" (UniqueName: \"kubernetes.io/projected/eb1a729e-aa92-4658-b088-ec2b17042358-kube-api-access-94ppr\") pod \"eb1a729e-aa92-4658-b088-ec2b17042358\" (UID: \"eb1a729e-aa92-4658-b088-ec2b17042358\") " Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.704188 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb1a729e-aa92-4658-b088-ec2b17042358-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb1a729e-aa92-4658-b088-ec2b17042358" (UID: "eb1a729e-aa92-4658-b088-ec2b17042358"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.706654 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb1a729e-aa92-4658-b088-ec2b17042358-kube-api-access-94ppr" (OuterVolumeSpecName: "kube-api-access-94ppr") pod "eb1a729e-aa92-4658-b088-ec2b17042358" (UID: "eb1a729e-aa92-4658-b088-ec2b17042358"). InnerVolumeSpecName "kube-api-access-94ppr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.805987 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94ppr\" (UniqueName: \"kubernetes.io/projected/eb1a729e-aa92-4658-b088-ec2b17042358-kube-api-access-94ppr\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:08 crc kubenswrapper[4740]: I0130 16:17:08.806051 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb1a729e-aa92-4658-b088-ec2b17042358-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.024938 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a2dd-account-create-update-vtbbj" event={"ID":"eb1a729e-aa92-4658-b088-ec2b17042358","Type":"ContainerDied","Data":"282481dbd58404f9505f2098add7ef4b86160972f3567f5d98b4f652e971a62d"} Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.025052 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="282481dbd58404f9505f2098add7ef4b86160972f3567f5d98b4f652e971a62d" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.024958 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a2dd-account-create-update-vtbbj" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.026613 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-49lwq" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.026594 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-49lwq" event={"ID":"d11a8409-8872-4bdb-8409-db5350a4b0c4","Type":"ContainerDied","Data":"f2557688a407a1a46f19c0e76be2665734985a1549b619d24c3aef2f9094adfd"} Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.026694 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2557688a407a1a46f19c0e76be2665734985a1549b619d24c3aef2f9094adfd" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.574638 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.617409 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kb9rw"] Jan 30 16:17:09 crc kubenswrapper[4740]: E0130 16:17:09.617899 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445dee53-61e3-43c6-b8a9-278954f963a2" containerName="swift-ring-rebalance" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.617912 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="445dee53-61e3-43c6-b8a9-278954f963a2" containerName="swift-ring-rebalance" Jan 30 16:17:09 crc kubenswrapper[4740]: E0130 16:17:09.617923 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1a729e-aa92-4658-b088-ec2b17042358" containerName="mariadb-account-create-update" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.617929 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1a729e-aa92-4658-b088-ec2b17042358" containerName="mariadb-account-create-update" Jan 30 16:17:09 crc kubenswrapper[4740]: E0130 16:17:09.617943 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11a8409-8872-4bdb-8409-db5350a4b0c4" containerName="mariadb-database-create" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.617950 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11a8409-8872-4bdb-8409-db5350a4b0c4" containerName="mariadb-database-create" Jan 30 16:17:09 crc kubenswrapper[4740]: E0130 16:17:09.617977 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c325e0c-ce45-4394-950e-74f73640fc73" containerName="ovn-config" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.617983 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c325e0c-ce45-4394-950e-74f73640fc73" containerName="ovn-config" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.618149 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="445dee53-61e3-43c6-b8a9-278954f963a2" containerName="swift-ring-rebalance" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.618168 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c325e0c-ce45-4394-950e-74f73640fc73" containerName="ovn-config" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.618178 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1a729e-aa92-4658-b088-ec2b17042358" containerName="mariadb-account-create-update" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.618185 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11a8409-8872-4bdb-8409-db5350a4b0c4" containerName="mariadb-database-create" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.619271 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.620860 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-additional-scripts\") pod \"1c325e0c-ce45-4394-950e-74f73640fc73\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.620900 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run\") pod \"1c325e0c-ce45-4394-950e-74f73640fc73\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.620987 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run-ovn\") pod \"1c325e0c-ce45-4394-950e-74f73640fc73\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.621016 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-log-ovn\") pod \"1c325e0c-ce45-4394-950e-74f73640fc73\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.621045 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbm4q\" (UniqueName: \"kubernetes.io/projected/1c325e0c-ce45-4394-950e-74f73640fc73-kube-api-access-xbm4q\") pod \"1c325e0c-ce45-4394-950e-74f73640fc73\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.621146 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-scripts\") pod \"1c325e0c-ce45-4394-950e-74f73640fc73\" (UID: \"1c325e0c-ce45-4394-950e-74f73640fc73\") " Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.621373 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run" (OuterVolumeSpecName: "var-run") pod "1c325e0c-ce45-4394-950e-74f73640fc73" (UID: "1c325e0c-ce45-4394-950e-74f73640fc73"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.621443 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1c325e0c-ce45-4394-950e-74f73640fc73" (UID: "1c325e0c-ce45-4394-950e-74f73640fc73"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.621530 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1c325e0c-ce45-4394-950e-74f73640fc73" (UID: "1c325e0c-ce45-4394-950e-74f73640fc73"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.621799 4740 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.621817 4740 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.621825 4740 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c325e0c-ce45-4394-950e-74f73640fc73-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.622204 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1c325e0c-ce45-4394-950e-74f73640fc73" (UID: "1c325e0c-ce45-4394-950e-74f73640fc73"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.622665 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-scripts" (OuterVolumeSpecName: "scripts") pod "1c325e0c-ce45-4394-950e-74f73640fc73" (UID: "1c325e0c-ce45-4394-950e-74f73640fc73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.634255 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c325e0c-ce45-4394-950e-74f73640fc73-kube-api-access-xbm4q" (OuterVolumeSpecName: "kube-api-access-xbm4q") pod "1c325e0c-ce45-4394-950e-74f73640fc73" (UID: "1c325e0c-ce45-4394-950e-74f73640fc73"). InnerVolumeSpecName "kube-api-access-xbm4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.634748 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.634949 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h2bf2" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.636255 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kb9rw"] Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.725044 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-combined-ca-bundle\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.725867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-db-sync-config-data\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.726112 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-config-data\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.726262 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b27z\" (UniqueName: \"kubernetes.io/projected/7b01ab87-38ce-4839-ac41-038201f727f9-kube-api-access-6b27z\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.726732 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.730567 4740 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c325e0c-ce45-4394-950e-74f73640fc73-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.730678 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbm4q\" (UniqueName: \"kubernetes.io/projected/1c325e0c-ce45-4394-950e-74f73640fc73-kube-api-access-xbm4q\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.740075 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hpdng" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.834255 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8abe33e-02c0-4927-b025-100752d57e49-operator-scripts\") pod \"f8abe33e-02c0-4927-b025-100752d57e49\" (UID: \"f8abe33e-02c0-4927-b025-100752d57e49\") " Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.834594 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn75k\" (UniqueName: \"kubernetes.io/projected/f8abe33e-02c0-4927-b025-100752d57e49-kube-api-access-gn75k\") pod \"f8abe33e-02c0-4927-b025-100752d57e49\" (UID: \"f8abe33e-02c0-4927-b025-100752d57e49\") " Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.834956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-combined-ca-bundle\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.834992 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-db-sync-config-data\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.835080 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-config-data\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.835122 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b27z\" (UniqueName: \"kubernetes.io/projected/7b01ab87-38ce-4839-ac41-038201f727f9-kube-api-access-6b27z\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.835263 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8abe33e-02c0-4927-b025-100752d57e49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8abe33e-02c0-4927-b025-100752d57e49" (UID: "f8abe33e-02c0-4927-b025-100752d57e49"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.845315 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-combined-ca-bundle\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.845873 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-db-sync-config-data\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.863939 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b27z\" (UniqueName: \"kubernetes.io/projected/7b01ab87-38ce-4839-ac41-038201f727f9-kube-api-access-6b27z\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.867962 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-config-data\") pod \"glance-db-sync-kb9rw\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.888602 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8abe33e-02c0-4927-b025-100752d57e49-kube-api-access-gn75k" (OuterVolumeSpecName: "kube-api-access-gn75k") pod "f8abe33e-02c0-4927-b025-100752d57e49" (UID: "f8abe33e-02c0-4927-b025-100752d57e49"). InnerVolumeSpecName "kube-api-access-gn75k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.937770 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8abe33e-02c0-4927-b025-100752d57e49-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.938180 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn75k\" (UniqueName: \"kubernetes.io/projected/f8abe33e-02c0-4927-b025-100752d57e49-kube-api-access-gn75k\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:09 crc kubenswrapper[4740]: I0130 16:17:09.981033 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-8vhhm" Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.035235 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kb9rw" Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.037703 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hpdng" event={"ID":"f8abe33e-02c0-4927-b025-100752d57e49","Type":"ContainerDied","Data":"c832645b8ed782545e74e1405cf304bca2264fbf3a9010b589b4fce517b86074"} Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.037746 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c832645b8ed782545e74e1405cf304bca2264fbf3a9010b589b4fce517b86074" Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.037809 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hpdng" Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.046823 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8vhhm-config-8lt8j" event={"ID":"1c325e0c-ce45-4394-950e-74f73640fc73","Type":"ContainerDied","Data":"3dab56732d4994df0036ae46dd73dc2fcfa54b435be6071cef2067ffa556f61a"} Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.047134 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dab56732d4994df0036ae46dd73dc2fcfa54b435be6071cef2067ffa556f61a" Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.046951 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8vhhm-config-8lt8j" Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.743018 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8vhhm-config-8lt8j"] Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.757502 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8vhhm-config-8lt8j"] Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.806028 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kb9rw"] Jan 30 16:17:10 crc kubenswrapper[4740]: W0130 16:17:10.831824 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b01ab87_38ce_4839_ac41_038201f727f9.slice/crio-2ea1c175589bc19944f9d5be23959c4167edea72d09fd8c1bd42562edd60fae1 WatchSource:0}: Error finding container 2ea1c175589bc19944f9d5be23959c4167edea72d09fd8c1bd42562edd60fae1: Status 404 returned error can't find the container with id 2ea1c175589bc19944f9d5be23959c4167edea72d09fd8c1bd42562edd60fae1 Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.888135 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8vhhm-config-5vmrt"] Jan 30 16:17:10 crc kubenswrapper[4740]: E0130 16:17:10.889192 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8abe33e-02c0-4927-b025-100752d57e49" containerName="mariadb-account-create-update" Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.889220 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8abe33e-02c0-4927-b025-100752d57e49" containerName="mariadb-account-create-update" Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.889469 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8abe33e-02c0-4927-b025-100752d57e49" containerName="mariadb-account-create-update" Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.890317 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.895774 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 16:17:10 crc kubenswrapper[4740]: I0130 16:17:10.902839 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8vhhm-config-5vmrt"] Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.057673 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kb9rw" event={"ID":"7b01ab87-38ce-4839-ac41-038201f727f9","Type":"ContainerStarted","Data":"2ea1c175589bc19944f9d5be23959c4167edea72d09fd8c1bd42562edd60fae1"} Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.074074 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-scripts\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.074134 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run-ovn\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.074250 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-log-ovn\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.074645 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7628\" (UniqueName: \"kubernetes.io/projected/17948b37-abe8-44e6-8358-6497438529d3-kube-api-access-k7628\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.074928 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-additional-scripts\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.074998 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.176366 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-scripts\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " 
pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.176413 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run-ovn\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.176439 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-log-ovn\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.176512 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7628\" (UniqueName: \"kubernetes.io/projected/17948b37-abe8-44e6-8358-6497438529d3-kube-api-access-k7628\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.176580 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-additional-scripts\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.176621 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.176814 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run-ovn\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.176851 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.177240 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-log-ovn\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.177536 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-additional-scripts\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " 
pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.179045 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-scripts\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.199381 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7628\" (UniqueName: \"kubernetes.io/projected/17948b37-abe8-44e6-8358-6497438529d3-kube-api-access-k7628\") pod \"ovn-controller-8vhhm-config-5vmrt\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.211462 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.356145 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c325e0c-ce45-4394-950e-74f73640fc73" path="/var/lib/kubelet/pods/1c325e0c-ce45-4394-950e-74f73640fc73/volumes" Jan 30 16:17:11 crc kubenswrapper[4740]: I0130 16:17:11.733665 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8vhhm-config-5vmrt"] Jan 30 16:17:12 crc kubenswrapper[4740]: I0130 16:17:12.076298 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8vhhm-config-5vmrt" event={"ID":"17948b37-abe8-44e6-8358-6497438529d3","Type":"ContainerStarted","Data":"6c9335e8cccf06430184225a9f234d49364280b39626ee33e02f33c275328a67"} Jan 30 16:17:12 crc kubenswrapper[4740]: I0130 16:17:12.202578 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hpdng"] Jan 30 16:17:12 crc kubenswrapper[4740]: I0130 16:17:12.215164 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hpdng"] Jan 30 16:17:13 crc kubenswrapper[4740]: I0130 16:17:13.091218 4740 generic.go:334] "Generic (PLEG): container finished" podID="17948b37-abe8-44e6-8358-6497438529d3" containerID="ff8773bcdc98b06479c05b63272fb5a64ab27e6e6a2e3e085e46be38175afd2d" exitCode=0 Jan 30 16:17:13 crc kubenswrapper[4740]: I0130 16:17:13.091320 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8vhhm-config-5vmrt" event={"ID":"17948b37-abe8-44e6-8358-6497438529d3","Type":"ContainerDied","Data":"ff8773bcdc98b06479c05b63272fb5a64ab27e6e6a2e3e085e46be38175afd2d"} Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:13.347033 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8abe33e-02c0-4927-b025-100752d57e49" path="/var/lib/kubelet/pods/f8abe33e-02c0-4927-b025-100752d57e49/volumes" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:13.742975 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift\") pod \"swift-storage-0\" (UID: \"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:13.774677 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75ff5548-2e68-494b-b131-2b71eb8c9376-etc-swift\") pod \"swift-storage-0\" (UID: 
\"75ff5548-2e68-494b-b131-2b71eb8c9376\") " pod="openstack/swift-storage-0" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:13.852789 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.015490 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.501248 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.669817 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run-ovn\") pod \"17948b37-abe8-44e6-8358-6497438529d3\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.670327 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7628\" (UniqueName: \"kubernetes.io/projected/17948b37-abe8-44e6-8358-6497438529d3-kube-api-access-k7628\") pod \"17948b37-abe8-44e6-8358-6497438529d3\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.669948 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "17948b37-abe8-44e6-8358-6497438529d3" (UID: "17948b37-abe8-44e6-8358-6497438529d3"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.670456 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-log-ovn\") pod \"17948b37-abe8-44e6-8358-6497438529d3\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.670497 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-additional-scripts\") pod \"17948b37-abe8-44e6-8358-6497438529d3\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.670577 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run\") pod \"17948b37-abe8-44e6-8358-6497438529d3\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.670668 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-scripts\") pod \"17948b37-abe8-44e6-8358-6497438529d3\" (UID: \"17948b37-abe8-44e6-8358-6497438529d3\") " Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.670888 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "17948b37-abe8-44e6-8358-6497438529d3" (UID: "17948b37-abe8-44e6-8358-6497438529d3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.670936 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run" (OuterVolumeSpecName: "var-run") pod "17948b37-abe8-44e6-8358-6497438529d3" (UID: "17948b37-abe8-44e6-8358-6497438529d3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.671557 4740 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.671576 4740 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.671576 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "17948b37-abe8-44e6-8358-6497438529d3" (UID: "17948b37-abe8-44e6-8358-6497438529d3"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.671603 4740 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/17948b37-abe8-44e6-8358-6497438529d3-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.671927 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-scripts" (OuterVolumeSpecName: "scripts") pod "17948b37-abe8-44e6-8358-6497438529d3" (UID: "17948b37-abe8-44e6-8358-6497438529d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.690007 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17948b37-abe8-44e6-8358-6497438529d3-kube-api-access-k7628" (OuterVolumeSpecName: "kube-api-access-k7628") pod "17948b37-abe8-44e6-8358-6497438529d3" (UID: "17948b37-abe8-44e6-8358-6497438529d3"). InnerVolumeSpecName "kube-api-access-k7628". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.776296 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.776336 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7628\" (UniqueName: \"kubernetes.io/projected/17948b37-abe8-44e6-8358-6497438529d3-kube-api-access-k7628\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:14 crc kubenswrapper[4740]: I0130 16:17:14.776367 4740 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/17948b37-abe8-44e6-8358-6497438529d3-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:15 crc kubenswrapper[4740]: I0130 16:17:15.133925 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8vhhm-config-5vmrt" event={"ID":"17948b37-abe8-44e6-8358-6497438529d3","Type":"ContainerDied","Data":"6c9335e8cccf06430184225a9f234d49364280b39626ee33e02f33c275328a67"} Jan 30 16:17:15 crc kubenswrapper[4740]: I0130 16:17:15.134378 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c9335e8cccf06430184225a9f234d49364280b39626ee33e02f33c275328a67" Jan 30 16:17:15 crc kubenswrapper[4740]: I0130 16:17:15.134463 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8vhhm-config-5vmrt" Jan 30 16:17:15 crc kubenswrapper[4740]: I0130 16:17:15.142291 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cfabe06a-6c42-4191-b819-db7e22a9ea6b","Type":"ContainerStarted","Data":"7d3aa421abc461547b5f4701a7b6cab52be809c1a84dcf81661fdb6c388a9acc"} Jan 30 16:17:15 crc kubenswrapper[4740]: I0130 16:17:15.194644 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.63098349 podStartE2EDuration="1m35.194620216s" podCreationTimestamp="2026-01-30 16:15:40 +0000 UTC" firstStartedPulling="2026-01-30 16:16:02.760663303 +0000 UTC m=+1211.397725902" lastFinishedPulling="2026-01-30 16:17:14.324300029 +0000 UTC m=+1282.961362628" observedRunningTime="2026-01-30 16:17:15.185063428 +0000 UTC m=+1283.822126017" watchObservedRunningTime="2026-01-30 16:17:15.194620216 +0000 UTC m=+1283.831682815" Jan 30 16:17:15 crc kubenswrapper[4740]: I0130 16:17:15.291766 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 16:17:15 crc kubenswrapper[4740]: I0130 16:17:15.582780 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8vhhm-config-5vmrt"] Jan 30 16:17:15 crc kubenswrapper[4740]: I0130 16:17:15.594718 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8vhhm-config-5vmrt"] Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.104572 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.194146 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"e1ae9d6285b4aaad0c484c1621bd338b85cb9884f37ee77b47c261d2417f526e"} Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.216320 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.477469 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8x9s2"] Jan 30 16:17:16 crc kubenswrapper[4740]: E0130 16:17:16.478009 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17948b37-abe8-44e6-8358-6497438529d3" containerName="ovn-config" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.478030 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="17948b37-abe8-44e6-8358-6497438529d3" containerName="ovn-config" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.478211 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="17948b37-abe8-44e6-8358-6497438529d3" containerName="ovn-config" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.479052 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8x9s2" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.498449 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8x9s2"] Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.643513 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-operator-scripts\") pod \"barbican-db-create-8x9s2\" (UID: \"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f\") " pod="openstack/barbican-db-create-8x9s2" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.643641 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7n2v\" (UniqueName: \"kubernetes.io/projected/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-kube-api-access-b7n2v\") pod \"barbican-db-create-8x9s2\" (UID: \"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f\") " pod="openstack/barbican-db-create-8x9s2" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.726112 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-v7ghb"] Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.727692 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v7ghb" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.745822 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-operator-scripts\") pod \"barbican-db-create-8x9s2\" (UID: \"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f\") " pod="openstack/barbican-db-create-8x9s2" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.745884 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7n2v\" (UniqueName: \"kubernetes.io/projected/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-kube-api-access-b7n2v\") pod \"barbican-db-create-8x9s2\" (UID: \"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f\") " pod="openstack/barbican-db-create-8x9s2" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.747385 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-operator-scripts\") pod \"barbican-db-create-8x9s2\" (UID: \"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f\") " pod="openstack/barbican-db-create-8x9s2" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.747916 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v7ghb"] Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.763218 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-397f-account-create-update-zmhwn"] Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.778785 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-397f-account-create-update-zmhwn" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.790706 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7n2v\" (UniqueName: \"kubernetes.io/projected/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-kube-api-access-b7n2v\") pod \"barbican-db-create-8x9s2\" (UID: \"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f\") " pod="openstack/barbican-db-create-8x9s2" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.820187 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.832025 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-397f-account-create-update-zmhwn"] Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.853547 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76pss\" (UniqueName: \"kubernetes.io/projected/21370d38-9663-4ffe-acb4-f009ebf39a66-kube-api-access-76pss\") pod \"cinder-db-create-v7ghb\" (UID: \"21370d38-9663-4ffe-acb4-f009ebf39a66\") " pod="openstack/cinder-db-create-v7ghb" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.853638 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21370d38-9663-4ffe-acb4-f009ebf39a66-operator-scripts\") pod \"cinder-db-create-v7ghb\" (UID: \"21370d38-9663-4ffe-acb4-f009ebf39a66\") " pod="openstack/cinder-db-create-v7ghb" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.856994 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8x9s2" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.897477 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.903798 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-x645n"] Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.905492 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-x645n" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.957666 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21370d38-9663-4ffe-acb4-f009ebf39a66-operator-scripts\") pod \"cinder-db-create-v7ghb\" (UID: \"21370d38-9663-4ffe-acb4-f009ebf39a66\") " pod="openstack/cinder-db-create-v7ghb" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.957797 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3e89928-c4f5-41ca-aea1-131fa654097d-operator-scripts\") pod \"barbican-397f-account-create-update-zmhwn\" (UID: \"f3e89928-c4f5-41ca-aea1-131fa654097d\") " pod="openstack/barbican-397f-account-create-update-zmhwn" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.957836 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sjhr\" (UniqueName: \"kubernetes.io/projected/f3e89928-c4f5-41ca-aea1-131fa654097d-kube-api-access-6sjhr\") pod \"barbican-397f-account-create-update-zmhwn\" (UID: \"f3e89928-c4f5-41ca-aea1-131fa654097d\") " pod="openstack/barbican-397f-account-create-update-zmhwn" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.957896 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76pss\" (UniqueName: \"kubernetes.io/projected/21370d38-9663-4ffe-acb4-f009ebf39a66-kube-api-access-76pss\") pod \"cinder-db-create-v7ghb\" (UID: \"21370d38-9663-4ffe-acb4-f009ebf39a66\") " pod="openstack/cinder-db-create-v7ghb" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.963488 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-x645n"] Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.959163 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21370d38-9663-4ffe-acb4-f009ebf39a66-operator-scripts\") pod \"cinder-db-create-v7ghb\" (UID: \"21370d38-9663-4ffe-acb4-f009ebf39a66\") " pod="openstack/cinder-db-create-v7ghb" Jan 30 16:17:16 crc kubenswrapper[4740]: I0130 16:17:16.990808 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76pss\" (UniqueName: \"kubernetes.io/projected/21370d38-9663-4ffe-acb4-f009ebf39a66-kube-api-access-76pss\") pod \"cinder-db-create-v7ghb\" (UID: \"21370d38-9663-4ffe-acb4-f009ebf39a66\") " pod="openstack/cinder-db-create-v7ghb" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.002916 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bd86c"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.004665 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bd86c" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.043618 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bd86c"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.055068 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-v7ghb" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.062078 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3e89928-c4f5-41ca-aea1-131fa654097d-operator-scripts\") pod \"barbican-397f-account-create-update-zmhwn\" (UID: \"f3e89928-c4f5-41ca-aea1-131fa654097d\") " pod="openstack/barbican-397f-account-create-update-zmhwn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.062142 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sjhr\" (UniqueName: \"kubernetes.io/projected/f3e89928-c4f5-41ca-aea1-131fa654097d-kube-api-access-6sjhr\") pod \"barbican-397f-account-create-update-zmhwn\" (UID: \"f3e89928-c4f5-41ca-aea1-131fa654097d\") " pod="openstack/barbican-397f-account-create-update-zmhwn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.062642 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2xsv\" (UniqueName: \"kubernetes.io/projected/aa8c381a-3987-4702-b366-7ac197e0a1af-kube-api-access-s2xsv\") pod \"cloudkitty-db-create-x645n\" (UID: \"aa8c381a-3987-4702-b366-7ac197e0a1af\") " pod="openstack/cloudkitty-db-create-x645n" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.062790 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa8c381a-3987-4702-b366-7ac197e0a1af-operator-scripts\") pod \"cloudkitty-db-create-x645n\" (UID: \"aa8c381a-3987-4702-b366-7ac197e0a1af\") " pod="openstack/cloudkitty-db-create-x645n" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.069281 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3e89928-c4f5-41ca-aea1-131fa654097d-operator-scripts\") pod \"barbican-397f-account-create-update-zmhwn\" (UID: \"f3e89928-c4f5-41ca-aea1-131fa654097d\") " pod="openstack/barbican-397f-account-create-update-zmhwn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.107363 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-9z8ns"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.109036 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.117980 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sjhr\" (UniqueName: \"kubernetes.io/projected/f3e89928-c4f5-41ca-aea1-131fa654097d-kube-api-access-6sjhr\") pod \"barbican-397f-account-create-update-zmhwn\" (UID: \"f3e89928-c4f5-41ca-aea1-131fa654097d\") " pod="openstack/barbican-397f-account-create-update-zmhwn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.126085 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-n88sh" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.126471 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.126676 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.128539 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9z8ns"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.137316 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-3feb-account-create-update-dk4rx"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.140211 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.145401 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.145719 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.171487 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa8c381a-3987-4702-b366-7ac197e0a1af-operator-scripts\") pod \"cloudkitty-db-create-x645n\" (UID: \"aa8c381a-3987-4702-b366-7ac197e0a1af\") " pod="openstack/cloudkitty-db-create-x645n" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.171590 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-operator-scripts\") pod \"neutron-db-create-bd86c\" (UID: \"7e7c1d41-649f-4a15-aec6-e8e6af5032b7\") " pod="openstack/neutron-db-create-bd86c" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.171657 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpf2h\" (UniqueName: \"kubernetes.io/projected/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-kube-api-access-tpf2h\") pod \"neutron-db-create-bd86c\" (UID: \"7e7c1d41-649f-4a15-aec6-e8e6af5032b7\") " pod="openstack/neutron-db-create-bd86c" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.171687 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2xsv\" (UniqueName: \"kubernetes.io/projected/aa8c381a-3987-4702-b366-7ac197e0a1af-kube-api-access-s2xsv\") pod \"cloudkitty-db-create-x645n\" (UID: \"aa8c381a-3987-4702-b366-7ac197e0a1af\") " pod="openstack/cloudkitty-db-create-x645n" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.172747 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa8c381a-3987-4702-b366-7ac197e0a1af-operator-scripts\") pod \"cloudkitty-db-create-x645n\" (UID: \"aa8c381a-3987-4702-b366-7ac197e0a1af\") " pod="openstack/cloudkitty-db-create-x645n" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.180065 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-3feb-account-create-update-dk4rx"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.198222 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-397f-account-create-update-zmhwn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.221617 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-356a-account-create-update-clwmn"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.229129 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2xsv\" (UniqueName: \"kubernetes.io/projected/aa8c381a-3987-4702-b366-7ac197e0a1af-kube-api-access-s2xsv\") pod \"cloudkitty-db-create-x645n\" (UID: \"aa8c381a-3987-4702-b366-7ac197e0a1af\") " pod="openstack/cloudkitty-db-create-x645n" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.229291 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-356a-account-create-update-clwmn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.231330 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-x645n" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.233749 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.242517 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-356a-account-create-update-clwmn"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.280383 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-operator-scripts\") pod \"neutron-db-create-bd86c\" (UID: \"7e7c1d41-649f-4a15-aec6-e8e6af5032b7\") " pod="openstack/neutron-db-create-bd86c" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.280770 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7qtt\" (UniqueName: \"kubernetes.io/projected/46b14301-3181-46f9-82ed-2d0ca6a44374-kube-api-access-d7qtt\") pod \"keystone-db-sync-9z8ns\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.280910 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-combined-ca-bundle\") pod \"keystone-db-sync-9z8ns\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.281051 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7wt\" (UniqueName: \"kubernetes.io/projected/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-kube-api-access-wq7wt\") pod \"cloudkitty-3feb-account-create-update-dk4rx\" (UID: \"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5\") " 
pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.281148 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpf2h\" (UniqueName: \"kubernetes.io/projected/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-kube-api-access-tpf2h\") pod \"neutron-db-create-bd86c\" (UID: \"7e7c1d41-649f-4a15-aec6-e8e6af5032b7\") " pod="openstack/neutron-db-create-bd86c" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.281328 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-config-data\") pod \"keystone-db-sync-9z8ns\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.281507 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-operator-scripts\") pod \"cloudkitty-3feb-account-create-update-dk4rx\" (UID: \"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5\") " pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.288298 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-operator-scripts\") pod \"neutron-db-create-bd86c\" (UID: \"7e7c1d41-649f-4a15-aec6-e8e6af5032b7\") " pod="openstack/neutron-db-create-bd86c" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.295932 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ca3f-account-create-update-wd46b"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.297451 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ca3f-account-create-update-wd46b" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.313589 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.334541 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpf2h\" (UniqueName: \"kubernetes.io/projected/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-kube-api-access-tpf2h\") pod \"neutron-db-create-bd86c\" (UID: \"7e7c1d41-649f-4a15-aec6-e8e6af5032b7\") " pod="openstack/neutron-db-create-bd86c" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.373409 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bd86c" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.386565 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkrcl\" (UniqueName: \"kubernetes.io/projected/cca32814-086a-46da-8e0f-01bce2d6dde1-kube-api-access-pkrcl\") pod \"neutron-356a-account-create-update-clwmn\" (UID: \"cca32814-086a-46da-8e0f-01bce2d6dde1\") " pod="openstack/neutron-356a-account-create-update-clwmn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.386896 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-config-data\") pod \"keystone-db-sync-9z8ns\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.386974 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-operator-scripts\") pod \"cloudkitty-3feb-account-create-update-dk4rx\" (UID: \"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5\") " pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.391908 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7qtt\" (UniqueName: \"kubernetes.io/projected/46b14301-3181-46f9-82ed-2d0ca6a44374-kube-api-access-d7qtt\") pod \"keystone-db-sync-9z8ns\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.392070 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-combined-ca-bundle\") pod \"keystone-db-sync-9z8ns\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.393077 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7wt\" (UniqueName: \"kubernetes.io/projected/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-kube-api-access-wq7wt\") pod \"cloudkitty-3feb-account-create-update-dk4rx\" (UID: \"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5\") " pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.394105 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca32814-086a-46da-8e0f-01bce2d6dde1-operator-scripts\") pod \"neutron-356a-account-create-update-clwmn\" (UID: \"cca32814-086a-46da-8e0f-01bce2d6dde1\") " pod="openstack/neutron-356a-account-create-update-clwmn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.397184 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17948b37-abe8-44e6-8358-6497438529d3" path="/var/lib/kubelet/pods/17948b37-abe8-44e6-8358-6497438529d3/volumes" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.398428 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ca3f-account-create-update-wd46b"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.400321 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-operator-scripts\") pod \"cloudkitty-3feb-account-create-update-dk4rx\" (UID: \"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5\") " pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.410453 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-config-data\") pod \"keystone-db-sync-9z8ns\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.410919 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-combined-ca-bundle\") pod \"keystone-db-sync-9z8ns\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.430069 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7qtt\" (UniqueName: \"kubernetes.io/projected/46b14301-3181-46f9-82ed-2d0ca6a44374-kube-api-access-d7qtt\") pod \"keystone-db-sync-9z8ns\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.442035 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7wt\" (UniqueName: \"kubernetes.io/projected/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-kube-api-access-wq7wt\") pod \"cloudkitty-3feb-account-create-update-dk4rx\" (UID: \"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5\") " pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.487091 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.489746 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-78p2v"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.491474 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-78p2v" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.494583 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.497715 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.498380 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mxzt\" (UniqueName: \"kubernetes.io/projected/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-kube-api-access-9mxzt\") pod \"cinder-ca3f-account-create-update-wd46b\" (UID: \"7d5e433e-38a7-4b0f-b95a-20a0c3229b56\") " pod="openstack/cinder-ca3f-account-create-update-wd46b" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.498435 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca32814-086a-46da-8e0f-01bce2d6dde1-operator-scripts\") pod \"neutron-356a-account-create-update-clwmn\" (UID: \"cca32814-086a-46da-8e0f-01bce2d6dde1\") " pod="openstack/neutron-356a-account-create-update-clwmn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.498484 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkrcl\" (UniqueName: \"kubernetes.io/projected/cca32814-086a-46da-8e0f-01bce2d6dde1-kube-api-access-pkrcl\") pod \"neutron-356a-account-create-update-clwmn\" (UID: \"cca32814-086a-46da-8e0f-01bce2d6dde1\") " pod="openstack/neutron-356a-account-create-update-clwmn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.498727 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-operator-scripts\") pod \"cinder-ca3f-account-create-update-wd46b\" (UID: \"7d5e433e-38a7-4b0f-b95a-20a0c3229b56\") " pod="openstack/cinder-ca3f-account-create-update-wd46b" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.500745 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca32814-086a-46da-8e0f-01bce2d6dde1-operator-scripts\") pod \"neutron-356a-account-create-update-clwmn\" (UID: \"cca32814-086a-46da-8e0f-01bce2d6dde1\") " pod="openstack/neutron-356a-account-create-update-clwmn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.528699 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-78p2v"] Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.529963 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkrcl\" (UniqueName: \"kubernetes.io/projected/cca32814-086a-46da-8e0f-01bce2d6dde1-kube-api-access-pkrcl\") pod \"neutron-356a-account-create-update-clwmn\" (UID: \"cca32814-086a-46da-8e0f-01bce2d6dde1\") " pod="openstack/neutron-356a-account-create-update-clwmn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.601720 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mxzt\" (UniqueName: \"kubernetes.io/projected/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-kube-api-access-9mxzt\") pod \"cinder-ca3f-account-create-update-wd46b\" (UID: \"7d5e433e-38a7-4b0f-b95a-20a0c3229b56\") " pod="openstack/cinder-ca3f-account-create-update-wd46b" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.601816 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21fe8a82-8128-465d-8187-b0d997c6cd55-operator-scripts\") pod \"root-account-create-update-78p2v\" (UID: 
\"21fe8a82-8128-465d-8187-b0d997c6cd55\") " pod="openstack/root-account-create-update-78p2v" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.601882 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps84p\" (UniqueName: \"kubernetes.io/projected/21fe8a82-8128-465d-8187-b0d997c6cd55-kube-api-access-ps84p\") pod \"root-account-create-update-78p2v\" (UID: \"21fe8a82-8128-465d-8187-b0d997c6cd55\") " pod="openstack/root-account-create-update-78p2v" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.601931 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-operator-scripts\") pod \"cinder-ca3f-account-create-update-wd46b\" (UID: \"7d5e433e-38a7-4b0f-b95a-20a0c3229b56\") " pod="openstack/cinder-ca3f-account-create-update-wd46b" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.602898 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-operator-scripts\") pod \"cinder-ca3f-account-create-update-wd46b\" (UID: \"7d5e433e-38a7-4b0f-b95a-20a0c3229b56\") " pod="openstack/cinder-ca3f-account-create-update-wd46b" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.619618 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mxzt\" (UniqueName: \"kubernetes.io/projected/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-kube-api-access-9mxzt\") pod \"cinder-ca3f-account-create-update-wd46b\" (UID: \"7d5e433e-38a7-4b0f-b95a-20a0c3229b56\") " pod="openstack/cinder-ca3f-account-create-update-wd46b" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.623612 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-356a-account-create-update-clwmn" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.703809 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21fe8a82-8128-465d-8187-b0d997c6cd55-operator-scripts\") pod \"root-account-create-update-78p2v\" (UID: \"21fe8a82-8128-465d-8187-b0d997c6cd55\") " pod="openstack/root-account-create-update-78p2v" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.704310 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps84p\" (UniqueName: \"kubernetes.io/projected/21fe8a82-8128-465d-8187-b0d997c6cd55-kube-api-access-ps84p\") pod \"root-account-create-update-78p2v\" (UID: \"21fe8a82-8128-465d-8187-b0d997c6cd55\") " pod="openstack/root-account-create-update-78p2v" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.705016 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21fe8a82-8128-465d-8187-b0d997c6cd55-operator-scripts\") pod \"root-account-create-update-78p2v\" (UID: \"21fe8a82-8128-465d-8187-b0d997c6cd55\") " pod="openstack/root-account-create-update-78p2v" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.713814 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ca3f-account-create-update-wd46b" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.752388 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps84p\" (UniqueName: \"kubernetes.io/projected/21fe8a82-8128-465d-8187-b0d997c6cd55-kube-api-access-ps84p\") pod \"root-account-create-update-78p2v\" (UID: \"21fe8a82-8128-465d-8187-b0d997c6cd55\") " pod="openstack/root-account-create-update-78p2v" Jan 30 16:17:17 crc kubenswrapper[4740]: I0130 16:17:17.833069 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-78p2v" Jan 30 16:17:24 crc kubenswrapper[4740]: I0130 16:17:24.018286 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Jan 30 16:17:26 crc kubenswrapper[4740]: I0130 16:17:26.893813 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:26 crc kubenswrapper[4740]: I0130 16:17:26.897976 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:27 crc kubenswrapper[4740]: I0130 16:17:27.346679 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.094914 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-356a-account-create-update-clwmn"] Jan 30 16:17:29 crc kubenswrapper[4740]: W0130 16:17:29.358560 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46b14301_3181_46f9_82ed_2d0ca6a44374.slice/crio-e585c23a3859d69d7e401cfad51f86aaa01950d6de6d347594fc3f3d61acce0b WatchSource:0}: Error finding container e585c23a3859d69d7e401cfad51f86aaa01950d6de6d347594fc3f3d61acce0b: Status 404 returned error can't find the container with id e585c23a3859d69d7e401cfad51f86aaa01950d6de6d347594fc3f3d61acce0b Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.397407 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9z8ns"] Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.397447 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bd86c"] Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.397459 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-3feb-account-create-update-dk4rx"] Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.448799 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"bc88312bccdcc9fa394a9cd928e9fee0cc674e3ba02ece154f383541d9c184aa"} Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.453529 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-356a-account-create-update-clwmn" event={"ID":"cca32814-086a-46da-8e0f-01bce2d6dde1","Type":"ContainerStarted","Data":"4448054716fe919f8af35f711072145a33c76a6828be9b5250bb3adb818cd297"} Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.734673 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-397f-account-create-update-zmhwn"] Jan 30 16:17:29 crc kubenswrapper[4740]: W0130 16:17:29.778365 4740 manager.go:1169] Failed to process watch event {EventType:0 
Jan 30 16:17:29 crc kubenswrapper[4740]: W0130 16:17:29.778365 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3e89928_c4f5_41ca_aea1_131fa654097d.slice/crio-acec9cc9b643d6e211f3a5ceacb2e30aa4d2b5222dda3a5c22acddf64b4d0d75 WatchSource:0}: Error finding container acec9cc9b643d6e211f3a5ceacb2e30aa4d2b5222dda3a5c22acddf64b4d0d75: Status 404 returned error can't find the container with id acec9cc9b643d6e211f3a5ceacb2e30aa4d2b5222dda3a5c22acddf64b4d0d75
Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.824115 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-x645n"]
Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.856457 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v7ghb"]
Jan 30 16:17:29 crc kubenswrapper[4740]: W0130 16:17:29.869143 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa8c381a_3987_4702_b366_7ac197e0a1af.slice/crio-c3588f136466585162beb2ffc6a996576aa0d336a14dd6383645c5d57476ec34 WatchSource:0}: Error finding container c3588f136466585162beb2ffc6a996576aa0d336a14dd6383645c5d57476ec34: Status 404 returned error can't find the container with id c3588f136466585162beb2ffc6a996576aa0d336a14dd6383645c5d57476ec34
Jan 30 16:17:29 crc kubenswrapper[4740]: W0130 16:17:29.873099 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21370d38_9663_4ffe_acb4_f009ebf39a66.slice/crio-96ed5f9b14ba3a63e54b3cdf9d254f5e4fc8842e32007fedb0c65a6d8966038f WatchSource:0}: Error finding container 96ed5f9b14ba3a63e54b3cdf9d254f5e4fc8842e32007fedb0c65a6d8966038f: Status 404 returned error can't find the container with id 96ed5f9b14ba3a63e54b3cdf9d254f5e4fc8842e32007fedb0c65a6d8966038f
Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.883239 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ca3f-account-create-update-wd46b"]
Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.930261 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-78p2v"]
Jan 30 16:17:29 crc kubenswrapper[4740]: I0130 16:17:29.941578 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8x9s2"]
Jan 30 16:17:30 crc kubenswrapper[4740]: W0130 16:17:30.031240 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2508d2b_35c8_4f18_bcef_a5a4b6cb046f.slice/crio-41b881f2ecf200abd8c2754006a1ec5f1644d39075bc86cafe7ea3a1d524f793 WatchSource:0}: Error finding container 41b881f2ecf200abd8c2754006a1ec5f1644d39075bc86cafe7ea3a1d524f793: Status 404 returned error can't find the container with id 41b881f2ecf200abd8c2754006a1ec5f1644d39075bc86cafe7ea3a1d524f793
Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.576485 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8x9s2" event={"ID":"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f","Type":"ContainerStarted","Data":"41b881f2ecf200abd8c2754006a1ec5f1644d39075bc86cafe7ea3a1d524f793"}
Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.578154 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-x645n" event={"ID":"aa8c381a-3987-4702-b366-7ac197e0a1af","Type":"ContainerStarted","Data":"c3588f136466585162beb2ffc6a996576aa0d336a14dd6383645c5d57476ec34"}
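The W-level manager.go:1169 entries are a benign race: the cgroup watch fires as soon as the crio-* slice appears, before the runtime can answer queries about the container, so the lookup returns a 404 that resolves on its own (the same container IDs show up in ContainerStarted events moments later). A hedged sketch of tolerating such a race with a bounded retry, using an invented lookup function in place of the real runtime query:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("status 404: container not found")

// lookupContainer stands in for the runtime query that fails in the log;
// it succeeds once it has been called readyAfter times.
func lookupContainer(id string, readyAfter int, calls *int) error {
	*calls++
	if *calls < readyAfter {
		return errNotFound
	}
	return nil
}

func watchEvent(id string) {
	var calls int
	for attempt := 1; attempt <= 3; attempt++ {
		if err := lookupContainer(id, 2, &calls); err != nil {
			fmt.Printf("attempt %d: %v (retrying)\n", attempt, err)
			time.Sleep(10 * time.Millisecond)
			continue
		}
		fmt.Printf("attempt %d: container %s found\n", attempt, id[:12])
		return
	}
}

func main() {
	watchEvent("acec9cc9b643d6e211f3a5ceacb2e30aa4d2b5222dda3a5c22acddf64b4d0d75")
}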
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v7ghb" event={"ID":"21370d38-9663-4ffe-acb4-f009ebf39a66","Type":"ContainerStarted","Data":"96ed5f9b14ba3a63e54b3cdf9d254f5e4fc8842e32007fedb0c65a6d8966038f"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.635909 4740 generic.go:334] "Generic (PLEG): container finished" podID="cca32814-086a-46da-8e0f-01bce2d6dde1" containerID="9e0b57fb064daac15565582f4323ff713888279e3da95984f8c2a379ecd8a760" exitCode=0 Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.636029 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-356a-account-create-update-clwmn" event={"ID":"cca32814-086a-46da-8e0f-01bce2d6dde1","Type":"ContainerDied","Data":"9e0b57fb064daac15565582f4323ff713888279e3da95984f8c2a379ecd8a760"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.651241 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bd86c" event={"ID":"7e7c1d41-649f-4a15-aec6-e8e6af5032b7","Type":"ContainerStarted","Data":"8d158e8436777bd801de6e538a077ab9ca6b8092cbe750c9cc28d29f6240198d"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.651311 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bd86c" event={"ID":"7e7c1d41-649f-4a15-aec6-e8e6af5032b7","Type":"ContainerStarted","Data":"636fb576c1eb295d60530842825d443075e7cbe9e691fb45558eb7abdfdfce11"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.673997 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"ff65a108dd16ce8c597538ab00a6c3820a03a0ab14a68b8e9179a23abd99ce5c"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.674443 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"45e2f7a4471e19895e0fa80410016317338bbe3cbd91f563803a843d8f1bb364"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.695517 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9z8ns" event={"ID":"46b14301-3181-46f9-82ed-2d0ca6a44374","Type":"ContainerStarted","Data":"e585c23a3859d69d7e401cfad51f86aaa01950d6de6d347594fc3f3d61acce0b"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.707833 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-78p2v" event={"ID":"21fe8a82-8128-465d-8187-b0d997c6cd55","Type":"ContainerStarted","Data":"5e6299fe66bef2d472c1563f3d9e6587096929f60c301311e08e1318f98c131e"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.725968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" event={"ID":"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5","Type":"ContainerStarted","Data":"80c12805cd6aa6be8e84c8d96943505c86c74597d73c63c726ad1d65487bae34"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.726027 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" event={"ID":"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5","Type":"ContainerStarted","Data":"77819ffe3788b16e30deccb716a996bc373b2758960edcf4e990e53bf2a301cd"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.731437 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ca3f-account-create-update-wd46b" 
event={"ID":"7d5e433e-38a7-4b0f-b95a-20a0c3229b56","Type":"ContainerStarted","Data":"fba219c5c0d67aacd5b6d71395885cff8f8c9b367afd7bba8bbca983cb25467a"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.735423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kb9rw" event={"ID":"7b01ab87-38ce-4839-ac41-038201f727f9","Type":"ContainerStarted","Data":"54bf3af07185705ec52d7ad900bbd9b5e90a774964d862b109deffe3c3d59962"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.738201 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-bd86c" podStartSLOduration=14.738166647 podStartE2EDuration="14.738166647s" podCreationTimestamp="2026-01-30 16:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:17:30.723868562 +0000 UTC m=+1299.360931161" watchObservedRunningTime="2026-01-30 16:17:30.738166647 +0000 UTC m=+1299.375229246" Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.739051 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-397f-account-create-update-zmhwn" event={"ID":"f3e89928-c4f5-41ca-aea1-131fa654097d","Type":"ContainerStarted","Data":"acec9cc9b643d6e211f3a5ceacb2e30aa4d2b5222dda3a5c22acddf64b4d0d75"} Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.763253 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" podStartSLOduration=13.763223559 podStartE2EDuration="13.763223559s" podCreationTimestamp="2026-01-30 16:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:17:30.758916932 +0000 UTC m=+1299.395979531" watchObservedRunningTime="2026-01-30 16:17:30.763223559 +0000 UTC m=+1299.400286158" Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.819570 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kb9rw" podStartSLOduration=4.025196227 podStartE2EDuration="21.819544958s" podCreationTimestamp="2026-01-30 16:17:09 +0000 UTC" firstStartedPulling="2026-01-30 16:17:10.834648813 +0000 UTC m=+1279.471711402" lastFinishedPulling="2026-01-30 16:17:28.628997534 +0000 UTC m=+1297.266060133" observedRunningTime="2026-01-30 16:17:30.792747422 +0000 UTC m=+1299.429810041" watchObservedRunningTime="2026-01-30 16:17:30.819544958 +0000 UTC m=+1299.456607557" Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.825722 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.834295 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="prometheus" containerID="cri-o://28e7fa72b294cb9f6aa525bccba688eb4e6ac45301bbb9aff083799260d0e886" gracePeriod=600 Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.834741 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="thanos-sidecar" containerID="cri-o://7d3aa421abc461547b5f4701a7b6cab52be809c1a84dcf81661fdb6c388a9acc" gracePeriod=600 Jan 30 16:17:30 crc kubenswrapper[4740]: I0130 16:17:30.834821 4740 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="config-reloader" containerID="cri-o://c3f0090233d16cc4204de34175ffb7119b6decbd39cf9823e6e681e2f35a2ea9" gracePeriod=600 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.770594 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"a23e770b883e58af87a1e60ea6da0c8f060ca9f6d7f4298f143860a315e5ab21"} Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.775742 4740 generic.go:334] "Generic (PLEG): container finished" podID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerID="7d3aa421abc461547b5f4701a7b6cab52be809c1a84dcf81661fdb6c388a9acc" exitCode=0 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.775772 4740 generic.go:334] "Generic (PLEG): container finished" podID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerID="c3f0090233d16cc4204de34175ffb7119b6decbd39cf9823e6e681e2f35a2ea9" exitCode=0 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.775783 4740 generic.go:334] "Generic (PLEG): container finished" podID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerID="28e7fa72b294cb9f6aa525bccba688eb4e6ac45301bbb9aff083799260d0e886" exitCode=0 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.775832 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cfabe06a-6c42-4191-b819-db7e22a9ea6b","Type":"ContainerDied","Data":"7d3aa421abc461547b5f4701a7b6cab52be809c1a84dcf81661fdb6c388a9acc"} Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.775919 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cfabe06a-6c42-4191-b819-db7e22a9ea6b","Type":"ContainerDied","Data":"c3f0090233d16cc4204de34175ffb7119b6decbd39cf9823e6e681e2f35a2ea9"} Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.775933 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cfabe06a-6c42-4191-b819-db7e22a9ea6b","Type":"ContainerDied","Data":"28e7fa72b294cb9f6aa525bccba688eb4e6ac45301bbb9aff083799260d0e886"} Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.777658 4740 generic.go:334] "Generic (PLEG): container finished" podID="f2508d2b-35c8-4f18-bcef-a5a4b6cb046f" containerID="a240866ede110ecb5a652b2c4933209aa1468ec9a06d696db15a4fdc73d6e57d" exitCode=0 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.777739 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8x9s2" event={"ID":"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f","Type":"ContainerDied","Data":"a240866ede110ecb5a652b2c4933209aa1468ec9a06d696db15a4fdc73d6e57d"} Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.780116 4740 generic.go:334] "Generic (PLEG): container finished" podID="21fe8a82-8128-465d-8187-b0d997c6cd55" containerID="3c35bb539c0af7148e74e52c998b23a715ff1848f9f6c817bbb0cf11cc1a3b83" exitCode=0 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.780153 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-78p2v" event={"ID":"21fe8a82-8128-465d-8187-b0d997c6cd55","Type":"ContainerDied","Data":"3c35bb539c0af7148e74e52c998b23a715ff1848f9f6c817bbb0cf11cc1a3b83"} Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.791955 4740 generic.go:334] "Generic (PLEG): container finished" 
podID="7d5e433e-38a7-4b0f-b95a-20a0c3229b56" containerID="0ee802a35a24582ba03c784b64b19c9689d896531710c700782985d90e411743" exitCode=0 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.792061 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ca3f-account-create-update-wd46b" event={"ID":"7d5e433e-38a7-4b0f-b95a-20a0c3229b56","Type":"ContainerDied","Data":"0ee802a35a24582ba03c784b64b19c9689d896531710c700782985d90e411743"} Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.796266 4740 generic.go:334] "Generic (PLEG): container finished" podID="f3e89928-c4f5-41ca-aea1-131fa654097d" containerID="15acd226fef64fcc08f14ab32aa800c2d560714fa1b43eef82e1b9d1d08fd1fd" exitCode=0 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.796381 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-397f-account-create-update-zmhwn" event={"ID":"f3e89928-c4f5-41ca-aea1-131fa654097d","Type":"ContainerDied","Data":"15acd226fef64fcc08f14ab32aa800c2d560714fa1b43eef82e1b9d1d08fd1fd"} Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.809976 4740 generic.go:334] "Generic (PLEG): container finished" podID="aa8c381a-3987-4702-b366-7ac197e0a1af" containerID="e97acd5936a7b9c720194673f8de009e6e07a8a80c52d9b525c8c68a375ff34e" exitCode=0 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.810203 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-x645n" event={"ID":"aa8c381a-3987-4702-b366-7ac197e0a1af","Type":"ContainerDied","Data":"e97acd5936a7b9c720194673f8de009e6e07a8a80c52d9b525c8c68a375ff34e"} Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.823108 4740 generic.go:334] "Generic (PLEG): container finished" podID="21370d38-9663-4ffe-acb4-f009ebf39a66" containerID="d4b77445ada670e04e79ef77c9b25a96f16d88f1decc2990831f888eecf063db" exitCode=0 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.823382 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v7ghb" event={"ID":"21370d38-9663-4ffe-acb4-f009ebf39a66","Type":"ContainerDied","Data":"d4b77445ada670e04e79ef77c9b25a96f16d88f1decc2990831f888eecf063db"} Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.837610 4740 generic.go:334] "Generic (PLEG): container finished" podID="b9cb2731-a1e5-444c-aa69-8a6c61e57cd5" containerID="80c12805cd6aa6be8e84c8d96943505c86c74597d73c63c726ad1d65487bae34" exitCode=0 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.837724 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" event={"ID":"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5","Type":"ContainerDied","Data":"80c12805cd6aa6be8e84c8d96943505c86c74597d73c63c726ad1d65487bae34"} Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.845164 4740 generic.go:334] "Generic (PLEG): container finished" podID="7e7c1d41-649f-4a15-aec6-e8e6af5032b7" containerID="8d158e8436777bd801de6e538a077ab9ca6b8092cbe750c9cc28d29f6240198d" exitCode=0 Jan 30 16:17:31 crc kubenswrapper[4740]: I0130 16:17:31.846511 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bd86c" event={"ID":"7e7c1d41-649f-4a15-aec6-e8e6af5032b7","Type":"ContainerDied","Data":"8d158e8436777bd801de6e538a077ab9ca6b8092cbe750c9cc28d29f6240198d"} Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.015274 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.141194 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437\") pod \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.141735 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-web-config\") pod \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.141802 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config-out\") pod \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.141869 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-0\") pod \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.141920 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-2\") pod \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.141996 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-1\") pod \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.142021 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config\") pod \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.142064 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tj92\" (UniqueName: \"kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-kube-api-access-6tj92\") pod \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.142138 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-thanos-prometheus-http-client-file\") pod \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.142210 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-tls-assets\") pod \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\" (UID: \"cfabe06a-6c42-4191-b819-db7e22a9ea6b\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.149534 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "cfabe06a-6c42-4191-b819-db7e22a9ea6b" (UID: "cfabe06a-6c42-4191-b819-db7e22a9ea6b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.150194 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "cfabe06a-6c42-4191-b819-db7e22a9ea6b" (UID: "cfabe06a-6c42-4191-b819-db7e22a9ea6b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.152870 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "cfabe06a-6c42-4191-b819-db7e22a9ea6b" (UID: "cfabe06a-6c42-4191-b819-db7e22a9ea6b"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.167259 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config" (OuterVolumeSpecName: "config") pod "cfabe06a-6c42-4191-b819-db7e22a9ea6b" (UID: "cfabe06a-6c42-4191-b819-db7e22a9ea6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.167721 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "cfabe06a-6c42-4191-b819-db7e22a9ea6b" (UID: "cfabe06a-6c42-4191-b819-db7e22a9ea6b"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.173320 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-kube-api-access-6tj92" (OuterVolumeSpecName: "kube-api-access-6tj92") pod "cfabe06a-6c42-4191-b819-db7e22a9ea6b" (UID: "cfabe06a-6c42-4191-b819-db7e22a9ea6b"). InnerVolumeSpecName "kube-api-access-6tj92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.190460 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config-out" (OuterVolumeSpecName: "config-out") pod "cfabe06a-6c42-4191-b819-db7e22a9ea6b" (UID: "cfabe06a-6c42-4191-b819-db7e22a9ea6b"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.191339 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "cfabe06a-6c42-4191-b819-db7e22a9ea6b" (UID: "cfabe06a-6c42-4191-b819-db7e22a9ea6b"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.214665 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-web-config" (OuterVolumeSpecName: "web-config") pod "cfabe06a-6c42-4191-b819-db7e22a9ea6b" (UID: "cfabe06a-6c42-4191-b819-db7e22a9ea6b"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.245185 4740 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.245229 4740 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.245246 4740 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cfabe06a-6c42-4191-b819-db7e22a9ea6b-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.245262 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.245276 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tj92\" (UniqueName: \"kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-kube-api-access-6tj92\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.245290 4740 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.245303 4740 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cfabe06a-6c42-4191-b819-db7e22a9ea6b-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.245320 4740 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cfabe06a-6c42-4191-b819-db7e22a9ea6b-web-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.245331 4740 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cfabe06a-6c42-4191-b819-db7e22a9ea6b-config-out\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc 
kubenswrapper[4740]: I0130 16:17:32.259375 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "cfabe06a-6c42-4191-b819-db7e22a9ea6b" (UID: "cfabe06a-6c42-4191-b819-db7e22a9ea6b"). InnerVolumeSpecName "pvc-69c5e224-a0d2-402e-a748-1966b938b437". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.348073 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-69c5e224-a0d2-402e-a748-1966b938b437\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437\") on node \"crc\" " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.374105 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-356a-account-create-update-clwmn" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.388931 4740 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.389265 4740 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-69c5e224-a0d2-402e-a748-1966b938b437" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437") on node "crc" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.452574 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkrcl\" (UniqueName: \"kubernetes.io/projected/cca32814-086a-46da-8e0f-01bce2d6dde1-kube-api-access-pkrcl\") pod \"cca32814-086a-46da-8e0f-01bce2d6dde1\" (UID: \"cca32814-086a-46da-8e0f-01bce2d6dde1\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.452764 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca32814-086a-46da-8e0f-01bce2d6dde1-operator-scripts\") pod \"cca32814-086a-46da-8e0f-01bce2d6dde1\" (UID: \"cca32814-086a-46da-8e0f-01bce2d6dde1\") " Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.453423 4740 reconciler_common.go:293] "Volume detached for volume \"pvc-69c5e224-a0d2-402e-a748-1966b938b437\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.454874 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cca32814-086a-46da-8e0f-01bce2d6dde1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cca32814-086a-46da-8e0f-01bce2d6dde1" (UID: "cca32814-086a-46da-8e0f-01bce2d6dde1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.457653 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cca32814-086a-46da-8e0f-01bce2d6dde1-kube-api-access-pkrcl" (OuterVolumeSpecName: "kube-api-access-pkrcl") pod "cca32814-086a-46da-8e0f-01bce2d6dde1" (UID: "cca32814-086a-46da-8e0f-01bce2d6dde1"). InnerVolumeSpecName "kube-api-access-pkrcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.555722 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkrcl\" (UniqueName: \"kubernetes.io/projected/cca32814-086a-46da-8e0f-01bce2d6dde1-kube-api-access-pkrcl\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.555773 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca32814-086a-46da-8e0f-01bce2d6dde1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.876807 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-356a-account-create-update-clwmn" event={"ID":"cca32814-086a-46da-8e0f-01bce2d6dde1","Type":"ContainerDied","Data":"4448054716fe919f8af35f711072145a33c76a6828be9b5250bb3adb818cd297"} Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.877426 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4448054716fe919f8af35f711072145a33c76a6828be9b5250bb3adb818cd297" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.877036 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-356a-account-create-update-clwmn" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.881262 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cfabe06a-6c42-4191-b819-db7e22a9ea6b","Type":"ContainerDied","Data":"e6e2999b066a78e5c5eb79fc9771a5329f5810cc795f40d1958c0c433cd5988a"} Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.881372 4740 scope.go:117] "RemoveContainer" containerID="7d3aa421abc461547b5f4701a7b6cab52be809c1a84dcf81661fdb6c388a9acc" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.881563 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:32 crc kubenswrapper[4740]: I0130 16:17:32.997635 4740 scope.go:117] "RemoveContainer" containerID="c3f0090233d16cc4204de34175ffb7119b6decbd39cf9823e6e681e2f35a2ea9" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.010074 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.031908 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.060948 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 16:17:33 crc kubenswrapper[4740]: E0130 16:17:33.061599 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="thanos-sidecar" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.061624 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="thanos-sidecar" Jan 30 16:17:33 crc kubenswrapper[4740]: E0130 16:17:33.061638 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca32814-086a-46da-8e0f-01bce2d6dde1" containerName="mariadb-account-create-update" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.061647 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca32814-086a-46da-8e0f-01bce2d6dde1" containerName="mariadb-account-create-update" Jan 30 16:17:33 crc kubenswrapper[4740]: E0130 16:17:33.061663 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="config-reloader" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.061671 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="config-reloader" Jan 30 16:17:33 crc kubenswrapper[4740]: E0130 16:17:33.061683 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="prometheus" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.061691 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="prometheus" Jan 30 16:17:33 crc kubenswrapper[4740]: E0130 16:17:33.061710 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="init-config-reloader" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.061718 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="init-config-reloader" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.061944 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="prometheus" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.061987 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="config-reloader" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.062002 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="thanos-sidecar" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.062023 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cca32814-086a-46da-8e0f-01bce2d6dde1" 
containerName="mariadb-account-create-update" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.064391 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.078708 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.079027 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.079144 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.082558 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.082641 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.082731 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.082902 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.083109 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-64lqk" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.087567 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195607 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195662 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195700 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b7e2c82-6c33-432f-b94e-ea939065b33c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195737 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b7e2c82-6c33-432f-b94e-ea939065b33c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195759 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b7e2c82-6c33-432f-b94e-ea939065b33c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195780 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9b7e2c82-6c33-432f-b94e-ea939065b33c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195815 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195853 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-69c5e224-a0d2-402e-a748-1966b938b437\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195880 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195904 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195933 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdrr8\" (UniqueName: \"kubernetes.io/projected/9b7e2c82-6c33-432f-b94e-ea939065b33c-kube-api-access-qdrr8\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.195957 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9b7e2c82-6c33-432f-b94e-ea939065b33c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.196005 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.212182 4740 scope.go:117] "RemoveContainer" containerID="28e7fa72b294cb9f6aa525bccba688eb4e6ac45301bbb9aff083799260d0e886" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.303548 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b7e2c82-6c33-432f-b94e-ea939065b33c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.303621 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b7e2c82-6c33-432f-b94e-ea939065b33c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.303653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b7e2c82-6c33-432f-b94e-ea939065b33c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.303703 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9b7e2c82-6c33-432f-b94e-ea939065b33c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.303748 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.303787 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-69c5e224-a0d2-402e-a748-1966b938b437\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.303840 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.303870 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.303894 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdrr8\" (UniqueName: \"kubernetes.io/projected/9b7e2c82-6c33-432f-b94e-ea939065b33c-kube-api-access-qdrr8\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.303926 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9b7e2c82-6c33-432f-b94e-ea939065b33c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.303990 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.304080 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.304108 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.310393 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9b7e2c82-6c33-432f-b94e-ea939065b33c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.310479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9b7e2c82-6c33-432f-b94e-ea939065b33c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.312839 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9b7e2c82-6c33-432f-b94e-ea939065b33c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.331209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.341734 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.342425 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdrr8\" (UniqueName: \"kubernetes.io/projected/9b7e2c82-6c33-432f-b94e-ea939065b33c-kube-api-access-qdrr8\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.374865 4740 scope.go:117] "RemoveContainer" containerID="811e1ab3fcca6e2edcda7c6cfb4d4afa6d235e9405fe2a573c48bb67dd411c09" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.384129 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.408027 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.408083 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-69c5e224-a0d2-402e-a748-1966b938b437\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da6c24608c93f0a2e0624bdeac2ceeb9f7fcc16b1afc060dd8f4d7936492775d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.431523 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9b7e2c82-6c33-432f-b94e-ea939065b33c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.434415 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.434733 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9b7e2c82-6c33-432f-b94e-ea939065b33c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.442558 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.444303 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9b7e2c82-6c33-432f-b94e-ea939065b33c-config\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.513952 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" path="/var/lib/kubelet/pods/cfabe06a-6c42-4191-b819-db7e22a9ea6b/volumes" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.526307 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.677329 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8x9s2" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.796946 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-69c5e224-a0d2-402e-a748-1966b938b437\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69c5e224-a0d2-402e-a748-1966b938b437\") pod \"prometheus-metric-storage-0\" (UID: \"9b7e2c82-6c33-432f-b94e-ea939065b33c\") " pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.838073 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-x645n" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.851282 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-operator-scripts\") pod \"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f\" (UID: \"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f\") " Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.851581 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7n2v\" (UniqueName: \"kubernetes.io/projected/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-kube-api-access-b7n2v\") pod \"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f\" (UID: \"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f\") " Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.852329 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2508d2b-35c8-4f18-bcef-a5a4b6cb046f" (UID: "f2508d2b-35c8-4f18-bcef-a5a4b6cb046f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.853096 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.857209 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-kube-api-access-b7n2v" (OuterVolumeSpecName: "kube-api-access-b7n2v") pod "f2508d2b-35c8-4f18-bcef-a5a4b6cb046f" (UID: "f2508d2b-35c8-4f18-bcef-a5a4b6cb046f"). InnerVolumeSpecName "kube-api-access-b7n2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.951364 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8x9s2" event={"ID":"f2508d2b-35c8-4f18-bcef-a5a4b6cb046f","Type":"ContainerDied","Data":"41b881f2ecf200abd8c2754006a1ec5f1644d39075bc86cafe7ea3a1d524f793"} Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.951434 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b881f2ecf200abd8c2754006a1ec5f1644d39075bc86cafe7ea3a1d524f793" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.951386 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8x9s2" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.954417 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-x645n" event={"ID":"aa8c381a-3987-4702-b366-7ac197e0a1af","Type":"ContainerDied","Data":"c3588f136466585162beb2ffc6a996576aa0d336a14dd6383645c5d57476ec34"} Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.954462 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3588f136466585162beb2ffc6a996576aa0d336a14dd6383645c5d57476ec34" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.954490 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2xsv\" (UniqueName: \"kubernetes.io/projected/aa8c381a-3987-4702-b366-7ac197e0a1af-kube-api-access-s2xsv\") pod \"aa8c381a-3987-4702-b366-7ac197e0a1af\" (UID: \"aa8c381a-3987-4702-b366-7ac197e0a1af\") " Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.954542 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa8c381a-3987-4702-b366-7ac197e0a1af-operator-scripts\") pod \"aa8c381a-3987-4702-b366-7ac197e0a1af\" (UID: \"aa8c381a-3987-4702-b366-7ac197e0a1af\") " Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.954606 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-x645n" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.955608 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa8c381a-3987-4702-b366-7ac197e0a1af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa8c381a-3987-4702-b366-7ac197e0a1af" (UID: "aa8c381a-3987-4702-b366-7ac197e0a1af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.956261 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa8c381a-3987-4702-b366-7ac197e0a1af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.956283 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7n2v\" (UniqueName: \"kubernetes.io/projected/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f-kube-api-access-b7n2v\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:33 crc kubenswrapper[4740]: I0130 16:17:33.983267 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa8c381a-3987-4702-b366-7ac197e0a1af-kube-api-access-s2xsv" (OuterVolumeSpecName: "kube-api-access-s2xsv") pod "aa8c381a-3987-4702-b366-7ac197e0a1af" (UID: "aa8c381a-3987-4702-b366-7ac197e0a1af"). InnerVolumeSpecName "kube-api-access-s2xsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:34 crc kubenswrapper[4740]: I0130 16:17:34.046236 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:34 crc kubenswrapper[4740]: I0130 16:17:34.058368 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2xsv\" (UniqueName: \"kubernetes.io/projected/aa8c381a-3987-4702-b366-7ac197e0a1af-kube-api-access-s2xsv\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:34 crc kubenswrapper[4740]: E0130 16:17:34.218722 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2508d2b_35c8_4f18_bcef_a5a4b6cb046f.slice\": RecentStats: unable to find data in memory cache]" Jan 30 16:17:34 crc kubenswrapper[4740]: I0130 16:17:34.680172 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 16:17:34 crc kubenswrapper[4740]: W0130 16:17:34.688002 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b7e2c82_6c33_432f_b94e_ea939065b33c.slice/crio-d324a47e2c6ec39ef20ac08ece3c8f8201191686d2b5ff0974dcb473f265ba6c WatchSource:0}: Error finding container d324a47e2c6ec39ef20ac08ece3c8f8201191686d2b5ff0974dcb473f265ba6c: Status 404 returned error can't find the container with id d324a47e2c6ec39ef20ac08ece3c8f8201191686d2b5ff0974dcb473f265ba6c Jan 30 16:17:34 crc kubenswrapper[4740]: I0130 16:17:34.895716 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="cfabe06a-6c42-4191-b819-db7e22a9ea6b" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.114:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:17:34 crc kubenswrapper[4740]: I0130 16:17:34.967030 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b7e2c82-6c33-432f-b94e-ea939065b33c","Type":"ContainerStarted","Data":"d324a47e2c6ec39ef20ac08ece3c8f8201191686d2b5ff0974dcb473f265ba6c"} Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.426402 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-397f-account-create-update-zmhwn" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.432866 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v7ghb" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.439437 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.445136 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bd86c" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.451582 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ca3f-account-create-update-wd46b" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.509572 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-78p2v" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.567960 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3e89928-c4f5-41ca-aea1-131fa654097d-operator-scripts\") pod \"f3e89928-c4f5-41ca-aea1-131fa654097d\" (UID: \"f3e89928-c4f5-41ca-aea1-131fa654097d\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.568213 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21370d38-9663-4ffe-acb4-f009ebf39a66-operator-scripts\") pod \"21370d38-9663-4ffe-acb4-f009ebf39a66\" (UID: \"21370d38-9663-4ffe-acb4-f009ebf39a66\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.568304 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sjhr\" (UniqueName: \"kubernetes.io/projected/f3e89928-c4f5-41ca-aea1-131fa654097d-kube-api-access-6sjhr\") pod \"f3e89928-c4f5-41ca-aea1-131fa654097d\" (UID: \"f3e89928-c4f5-41ca-aea1-131fa654097d\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.568512 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76pss\" (UniqueName: \"kubernetes.io/projected/21370d38-9663-4ffe-acb4-f009ebf39a66-kube-api-access-76pss\") pod \"21370d38-9663-4ffe-acb4-f009ebf39a66\" (UID: \"21370d38-9663-4ffe-acb4-f009ebf39a66\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.568560 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-operator-scripts\") pod \"7e7c1d41-649f-4a15-aec6-e8e6af5032b7\" (UID: \"7e7c1d41-649f-4a15-aec6-e8e6af5032b7\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.568599 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-operator-scripts\") pod \"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5\" (UID: \"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.568685 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq7wt\" (UniqueName: \"kubernetes.io/projected/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-kube-api-access-wq7wt\") pod \"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5\" (UID: \"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.568736 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-operator-scripts\") pod \"7d5e433e-38a7-4b0f-b95a-20a0c3229b56\" (UID: \"7d5e433e-38a7-4b0f-b95a-20a0c3229b56\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.568768 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mxzt\" (UniqueName: \"kubernetes.io/projected/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-kube-api-access-9mxzt\") pod \"7d5e433e-38a7-4b0f-b95a-20a0c3229b56\" (UID: \"7d5e433e-38a7-4b0f-b95a-20a0c3229b56\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.568871 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpf2h\" (UniqueName: 
\"kubernetes.io/projected/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-kube-api-access-tpf2h\") pod \"7e7c1d41-649f-4a15-aec6-e8e6af5032b7\" (UID: \"7e7c1d41-649f-4a15-aec6-e8e6af5032b7\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.571471 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e7c1d41-649f-4a15-aec6-e8e6af5032b7" (UID: "7e7c1d41-649f-4a15-aec6-e8e6af5032b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.572824 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e89928-c4f5-41ca-aea1-131fa654097d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3e89928-c4f5-41ca-aea1-131fa654097d" (UID: "f3e89928-c4f5-41ca-aea1-131fa654097d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.583483 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-kube-api-access-tpf2h" (OuterVolumeSpecName: "kube-api-access-tpf2h") pod "7e7c1d41-649f-4a15-aec6-e8e6af5032b7" (UID: "7e7c1d41-649f-4a15-aec6-e8e6af5032b7"). InnerVolumeSpecName "kube-api-access-tpf2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.584452 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e89928-c4f5-41ca-aea1-131fa654097d-kube-api-access-6sjhr" (OuterVolumeSpecName: "kube-api-access-6sjhr") pod "f3e89928-c4f5-41ca-aea1-131fa654097d" (UID: "f3e89928-c4f5-41ca-aea1-131fa654097d"). InnerVolumeSpecName "kube-api-access-6sjhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.585027 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d5e433e-38a7-4b0f-b95a-20a0c3229b56" (UID: "7d5e433e-38a7-4b0f-b95a-20a0c3229b56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.585178 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21370d38-9663-4ffe-acb4-f009ebf39a66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21370d38-9663-4ffe-acb4-f009ebf39a66" (UID: "21370d38-9663-4ffe-acb4-f009ebf39a66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.588046 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9cb2731-a1e5-444c-aa69-8a6c61e57cd5" (UID: "b9cb2731-a1e5-444c-aa69-8a6c61e57cd5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.590298 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-kube-api-access-9mxzt" (OuterVolumeSpecName: "kube-api-access-9mxzt") pod "7d5e433e-38a7-4b0f-b95a-20a0c3229b56" (UID: "7d5e433e-38a7-4b0f-b95a-20a0c3229b56"). InnerVolumeSpecName "kube-api-access-9mxzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.602671 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21370d38-9663-4ffe-acb4-f009ebf39a66-kube-api-access-76pss" (OuterVolumeSpecName: "kube-api-access-76pss") pod "21370d38-9663-4ffe-acb4-f009ebf39a66" (UID: "21370d38-9663-4ffe-acb4-f009ebf39a66"). InnerVolumeSpecName "kube-api-access-76pss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.603645 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-kube-api-access-wq7wt" (OuterVolumeSpecName: "kube-api-access-wq7wt") pod "b9cb2731-a1e5-444c-aa69-8a6c61e57cd5" (UID: "b9cb2731-a1e5-444c-aa69-8a6c61e57cd5"). InnerVolumeSpecName "kube-api-access-wq7wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.672117 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps84p\" (UniqueName: \"kubernetes.io/projected/21fe8a82-8128-465d-8187-b0d997c6cd55-kube-api-access-ps84p\") pod \"21fe8a82-8128-465d-8187-b0d997c6cd55\" (UID: \"21fe8a82-8128-465d-8187-b0d997c6cd55\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.672216 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21fe8a82-8128-465d-8187-b0d997c6cd55-operator-scripts\") pod \"21fe8a82-8128-465d-8187-b0d997c6cd55\" (UID: \"21fe8a82-8128-465d-8187-b0d997c6cd55\") " Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.673072 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21fe8a82-8128-465d-8187-b0d997c6cd55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21fe8a82-8128-465d-8187-b0d997c6cd55" (UID: "21fe8a82-8128-465d-8187-b0d997c6cd55"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.673868 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76pss\" (UniqueName: \"kubernetes.io/projected/21370d38-9663-4ffe-acb4-f009ebf39a66-kube-api-access-76pss\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.673894 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.673909 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.673922 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq7wt\" (UniqueName: \"kubernetes.io/projected/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5-kube-api-access-wq7wt\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.673938 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.673950 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mxzt\" (UniqueName: \"kubernetes.io/projected/7d5e433e-38a7-4b0f-b95a-20a0c3229b56-kube-api-access-9mxzt\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.673963 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpf2h\" (UniqueName: \"kubernetes.io/projected/7e7c1d41-649f-4a15-aec6-e8e6af5032b7-kube-api-access-tpf2h\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.673980 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3e89928-c4f5-41ca-aea1-131fa654097d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.673993 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21370d38-9663-4ffe-acb4-f009ebf39a66-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.674005 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sjhr\" (UniqueName: \"kubernetes.io/projected/f3e89928-c4f5-41ca-aea1-131fa654097d-kube-api-access-6sjhr\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.674162 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21fe8a82-8128-465d-8187-b0d997c6cd55-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.676547 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21fe8a82-8128-465d-8187-b0d997c6cd55-kube-api-access-ps84p" (OuterVolumeSpecName: "kube-api-access-ps84p") pod "21fe8a82-8128-465d-8187-b0d997c6cd55" (UID: "21fe8a82-8128-465d-8187-b0d997c6cd55"). InnerVolumeSpecName "kube-api-access-ps84p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:17:38 crc kubenswrapper[4740]: I0130 16:17:38.776649 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps84p\" (UniqueName: \"kubernetes.io/projected/21fe8a82-8128-465d-8187-b0d997c6cd55-kube-api-access-ps84p\") on node \"crc\" DevicePath \"\"" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.004081 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.004085 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-3feb-account-create-update-dk4rx" event={"ID":"b9cb2731-a1e5-444c-aa69-8a6c61e57cd5","Type":"ContainerDied","Data":"77819ffe3788b16e30deccb716a996bc373b2758960edcf4e990e53bf2a301cd"} Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.004209 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77819ffe3788b16e30deccb716a996bc373b2758960edcf4e990e53bf2a301cd" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.007948 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ca3f-account-create-update-wd46b" event={"ID":"7d5e433e-38a7-4b0f-b95a-20a0c3229b56","Type":"ContainerDied","Data":"fba219c5c0d67aacd5b6d71395885cff8f8c9b367afd7bba8bbca983cb25467a"} Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.007994 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ca3f-account-create-update-wd46b" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.007999 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fba219c5c0d67aacd5b6d71395885cff8f8c9b367afd7bba8bbca983cb25467a" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.009690 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-397f-account-create-update-zmhwn" event={"ID":"f3e89928-c4f5-41ca-aea1-131fa654097d","Type":"ContainerDied","Data":"acec9cc9b643d6e211f3a5ceacb2e30aa4d2b5222dda3a5c22acddf64b4d0d75"} Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.009716 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acec9cc9b643d6e211f3a5ceacb2e30aa4d2b5222dda3a5c22acddf64b4d0d75" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.009739 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-397f-account-create-update-zmhwn" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.010949 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bd86c" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.010949 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bd86c" event={"ID":"7e7c1d41-649f-4a15-aec6-e8e6af5032b7","Type":"ContainerDied","Data":"636fb576c1eb295d60530842825d443075e7cbe9e691fb45558eb7abdfdfce11"} Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.011064 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="636fb576c1eb295d60530842825d443075e7cbe9e691fb45558eb7abdfdfce11" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.012134 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-78p2v" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.012122 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-78p2v" event={"ID":"21fe8a82-8128-465d-8187-b0d997c6cd55","Type":"ContainerDied","Data":"5e6299fe66bef2d472c1563f3d9e6587096929f60c301311e08e1318f98c131e"} Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.012247 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6299fe66bef2d472c1563f3d9e6587096929f60c301311e08e1318f98c131e" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.017502 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v7ghb" event={"ID":"21370d38-9663-4ffe-acb4-f009ebf39a66","Type":"ContainerDied","Data":"96ed5f9b14ba3a63e54b3cdf9d254f5e4fc8842e32007fedb0c65a6d8966038f"} Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.017530 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ed5f9b14ba3a63e54b3cdf9d254f5e4fc8842e32007fedb0c65a6d8966038f" Jan 30 16:17:39 crc kubenswrapper[4740]: I0130 16:17:39.017611 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v7ghb" Jan 30 16:17:40 crc kubenswrapper[4740]: I0130 16:17:40.031981 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"7d199e1db7848b70b54ae6be34749bd03c2c8be54872b50ac3013c92c0590fe2"} Jan 30 16:17:40 crc kubenswrapper[4740]: I0130 16:17:40.034194 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b7e2c82-6c33-432f-b94e-ea939065b33c","Type":"ContainerStarted","Data":"636b190f557a915d1c508342ec1395aecf7619755efeb26e8aac439085bc81b2"} Jan 30 16:17:43 crc kubenswrapper[4740]: I0130 16:17:43.073711 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"3c71c91c3825e22fa1d65031e949761bb01f8466829e6dfb590466c83dac0e47"} Jan 30 16:17:43 crc kubenswrapper[4740]: I0130 16:17:43.074171 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"ae83166b68923a033e7e0d53899df7975ead54bac9bf9935b9f5d328228d2fc3"} Jan 30 16:17:43 crc kubenswrapper[4740]: I0130 16:17:43.075883 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9z8ns" event={"ID":"46b14301-3181-46f9-82ed-2d0ca6a44374","Type":"ContainerStarted","Data":"5728efea581ad5a415ac5bede63d154a9b94eb36868da1713c0bee6aa88e106f"} Jan 30 16:17:43 crc kubenswrapper[4740]: I0130 16:17:43.102198 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-9z8ns" podStartSLOduration=13.041726566 podStartE2EDuration="26.102175769s" podCreationTimestamp="2026-01-30 16:17:17 +0000 UTC" firstStartedPulling="2026-01-30 16:17:29.41270216 +0000 UTC m=+1298.049764759" lastFinishedPulling="2026-01-30 16:17:42.473151363 +0000 UTC m=+1311.110213962" observedRunningTime="2026-01-30 16:17:43.093464023 +0000 UTC m=+1311.730526642" watchObservedRunningTime="2026-01-30 16:17:43.102175769 +0000 UTC m=+1311.739238368" Jan 30 16:17:44 crc kubenswrapper[4740]: I0130 16:17:44.094707 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"69559d53e16b70bdebfb77b74e8bddbcd4560cf199d082533f0be5f768e22fbf"} Jan 30 16:17:48 crc kubenswrapper[4740]: I0130 16:17:48.147820 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"324123978cbe9fb4ad1c3fe7bfb842883d7662f7a5de8206050bb279b4f383c3"} Jan 30 16:17:48 crc kubenswrapper[4740]: I0130 16:17:48.148861 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"b6331915593b5419657c3f69b077ad460987b4664dcd3afbe20ce9e4df1e81a4"} Jan 30 16:17:48 crc kubenswrapper[4740]: I0130 16:17:48.161173 4740 generic.go:334] "Generic (PLEG): container finished" podID="9b7e2c82-6c33-432f-b94e-ea939065b33c" containerID="636b190f557a915d1c508342ec1395aecf7619755efeb26e8aac439085bc81b2" exitCode=0 Jan 30 16:17:48 crc kubenswrapper[4740]: I0130 16:17:48.161230 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b7e2c82-6c33-432f-b94e-ea939065b33c","Type":"ContainerDied","Data":"636b190f557a915d1c508342ec1395aecf7619755efeb26e8aac439085bc81b2"} Jan 30 16:17:50 crc kubenswrapper[4740]: I0130 16:17:50.187494 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"3434f7d376faede81c28a47701dd828d08bc202aa3ec2197f8e9748ea323dce9"} Jan 30 16:17:50 crc kubenswrapper[4740]: I0130 16:17:50.190842 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b7e2c82-6c33-432f-b94e-ea939065b33c","Type":"ContainerStarted","Data":"438621e588452c4e7a6856823536e39ab5efe3d6a16ebf42aa7ad727ecd40b97"} Jan 30 16:17:53 crc kubenswrapper[4740]: I0130 16:17:53.232118 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"07e46226d90cb22bcba5f15e8d9db8d3385c8762a96f2eb951e8375076f90fec"} Jan 30 16:17:54 crc kubenswrapper[4740]: I0130 16:17:54.247109 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"ff9b28459a62b3aeb026751cec90717394a609a9460d7ef59afd40962d61cc54"} Jan 30 16:17:57 crc kubenswrapper[4740]: I0130 16:17:57.279653 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b7e2c82-6c33-432f-b94e-ea939065b33c","Type":"ContainerStarted","Data":"05426208a9ebddc113e3c8c1e92fb1d53a74dd8b58a161428e002613d305a661"} Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.296121 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"e1f9301dfec44a2888b0a663e78130ad8c6ac8cc458a4c4ecd01909f9d221b70"} Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.296604 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"75ff5548-2e68-494b-b131-2b71eb8c9376","Type":"ContainerStarted","Data":"46cda8ed0a95c62cbbb43d973ac6d11fd0b3469efe3ccbdaffec4962145d5ea1"} Jan 30 16:17:58 crc 
kubenswrapper[4740]: I0130 16:17:58.298859 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9b7e2c82-6c33-432f-b94e-ea939065b33c","Type":"ContainerStarted","Data":"62e392fefe905e8e5a4c0b446928076baa3f0e159f5f0386d587e55dda0bf3d9"} Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.379931 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=46.463651894 podStartE2EDuration="1m18.379896409s" podCreationTimestamp="2026-01-30 16:16:40 +0000 UTC" firstStartedPulling="2026-01-30 16:17:15.300944355 +0000 UTC m=+1283.938006954" lastFinishedPulling="2026-01-30 16:17:47.21718887 +0000 UTC m=+1315.854251469" observedRunningTime="2026-01-30 16:17:58.345508065 +0000 UTC m=+1326.982570664" watchObservedRunningTime="2026-01-30 16:17:58.379896409 +0000 UTC m=+1327.016959008" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.393970 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=25.393943268 podStartE2EDuration="25.393943268s" podCreationTimestamp="2026-01-30 16:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:17:58.389864997 +0000 UTC m=+1327.026927596" watchObservedRunningTime="2026-01-30 16:17:58.393943268 +0000 UTC m=+1327.031005867" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.678926 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hwbwt"] Jan 30 16:17:58 crc kubenswrapper[4740]: E0130 16:17:58.679528 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9cb2731-a1e5-444c-aa69-8a6c61e57cd5" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.679554 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9cb2731-a1e5-444c-aa69-8a6c61e57cd5" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: E0130 16:17:58.679588 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2508d2b-35c8-4f18-bcef-a5a4b6cb046f" containerName="mariadb-database-create" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.679597 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2508d2b-35c8-4f18-bcef-a5a4b6cb046f" containerName="mariadb-database-create" Jan 30 16:17:58 crc kubenswrapper[4740]: E0130 16:17:58.679611 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7c1d41-649f-4a15-aec6-e8e6af5032b7" containerName="mariadb-database-create" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.679621 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7c1d41-649f-4a15-aec6-e8e6af5032b7" containerName="mariadb-database-create" Jan 30 16:17:58 crc kubenswrapper[4740]: E0130 16:17:58.679638 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21370d38-9663-4ffe-acb4-f009ebf39a66" containerName="mariadb-database-create" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.679646 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="21370d38-9663-4ffe-acb4-f009ebf39a66" containerName="mariadb-database-create" Jan 30 16:17:58 crc kubenswrapper[4740]: E0130 16:17:58.679657 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8c381a-3987-4702-b366-7ac197e0a1af" containerName="mariadb-database-create" Jan 30 16:17:58 crc 
kubenswrapper[4740]: I0130 16:17:58.679665 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8c381a-3987-4702-b366-7ac197e0a1af" containerName="mariadb-database-create" Jan 30 16:17:58 crc kubenswrapper[4740]: E0130 16:17:58.679683 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21fe8a82-8128-465d-8187-b0d997c6cd55" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.679691 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="21fe8a82-8128-465d-8187-b0d997c6cd55" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: E0130 16:17:58.679714 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5e433e-38a7-4b0f-b95a-20a0c3229b56" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.679722 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5e433e-38a7-4b0f-b95a-20a0c3229b56" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: E0130 16:17:58.679732 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e89928-c4f5-41ca-aea1-131fa654097d" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.679740 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e89928-c4f5-41ca-aea1-131fa654097d" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.679956 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e89928-c4f5-41ca-aea1-131fa654097d" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.679978 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5e433e-38a7-4b0f-b95a-20a0c3229b56" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.679991 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2508d2b-35c8-4f18-bcef-a5a4b6cb046f" containerName="mariadb-database-create" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.680007 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="21370d38-9663-4ffe-acb4-f009ebf39a66" containerName="mariadb-database-create" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.680025 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa8c381a-3987-4702-b366-7ac197e0a1af" containerName="mariadb-database-create" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.680040 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7c1d41-649f-4a15-aec6-e8e6af5032b7" containerName="mariadb-database-create" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.680050 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9cb2731-a1e5-444c-aa69-8a6c61e57cd5" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.680060 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="21fe8a82-8128-465d-8187-b0d997c6cd55" containerName="mariadb-account-create-update" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.681579 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.683424 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.696715 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hwbwt"] Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.856319 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7x6n\" (UniqueName: \"kubernetes.io/projected/b5163ee6-5fe7-40e5-9912-992b75276183-kube-api-access-h7x6n\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.856564 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-config\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.856692 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.856904 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.857025 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.857076 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.959513 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7x6n\" (UniqueName: \"kubernetes.io/projected/b5163ee6-5fe7-40e5-9912-992b75276183-kube-api-access-h7x6n\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.960081 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-config\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: 
\"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.960125 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.960174 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.960201 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.960229 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.961395 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.961516 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.961537 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.961542 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc kubenswrapper[4740]: I0130 16:17:58.962024 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-config\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:58 crc 
kubenswrapper[4740]: I0130 16:17:58.981149 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7x6n\" (UniqueName: \"kubernetes.io/projected/b5163ee6-5fe7-40e5-9912-992b75276183-kube-api-access-h7x6n\") pod \"dnsmasq-dns-5c79d794d7-hwbwt\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:59 crc kubenswrapper[4740]: I0130 16:17:59.009789 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:17:59 crc kubenswrapper[4740]: I0130 16:17:59.046746 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 16:17:59 crc kubenswrapper[4740]: I0130 16:17:59.489824 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hwbwt"] Jan 30 16:17:59 crc kubenswrapper[4740]: W0130 16:17:59.502429 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5163ee6_5fe7_40e5_9912_992b75276183.slice/crio-8e9b986df9dae5593dce5124312b0fbe1a29d9e131eab312657758565eb13f90 WatchSource:0}: Error finding container 8e9b986df9dae5593dce5124312b0fbe1a29d9e131eab312657758565eb13f90: Status 404 returned error can't find the container with id 8e9b986df9dae5593dce5124312b0fbe1a29d9e131eab312657758565eb13f90 Jan 30 16:18:00 crc kubenswrapper[4740]: I0130 16:18:00.320573 4740 generic.go:334] "Generic (PLEG): container finished" podID="b5163ee6-5fe7-40e5-9912-992b75276183" containerID="5092f1bb4975663aa79a1054efab62e555bc4d0b6d04832ff6bcf9dd41d8dab1" exitCode=0 Jan 30 16:18:00 crc kubenswrapper[4740]: I0130 16:18:00.320670 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" event={"ID":"b5163ee6-5fe7-40e5-9912-992b75276183","Type":"ContainerDied","Data":"5092f1bb4975663aa79a1054efab62e555bc4d0b6d04832ff6bcf9dd41d8dab1"} Jan 30 16:18:00 crc kubenswrapper[4740]: I0130 16:18:00.321053 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" event={"ID":"b5163ee6-5fe7-40e5-9912-992b75276183","Type":"ContainerStarted","Data":"8e9b986df9dae5593dce5124312b0fbe1a29d9e131eab312657758565eb13f90"} Jan 30 16:18:01 crc kubenswrapper[4740]: I0130 16:18:01.340315 4740 generic.go:334] "Generic (PLEG): container finished" podID="46b14301-3181-46f9-82ed-2d0ca6a44374" containerID="5728efea581ad5a415ac5bede63d154a9b94eb36868da1713c0bee6aa88e106f" exitCode=0 Jan 30 16:18:01 crc kubenswrapper[4740]: I0130 16:18:01.346940 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:18:01 crc kubenswrapper[4740]: I0130 16:18:01.346983 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" event={"ID":"b5163ee6-5fe7-40e5-9912-992b75276183","Type":"ContainerStarted","Data":"9d126e6c3d21ec363f174d65f5f83e020a6b06c2a3f68fc511d829428255c5aa"} Jan 30 16:18:01 crc kubenswrapper[4740]: I0130 16:18:01.347003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9z8ns" event={"ID":"46b14301-3181-46f9-82ed-2d0ca6a44374","Type":"ContainerDied","Data":"5728efea581ad5a415ac5bede63d154a9b94eb36868da1713c0bee6aa88e106f"} Jan 30 16:18:01 crc kubenswrapper[4740]: I0130 16:18:01.371760 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" podStartSLOduration=3.371736526 podStartE2EDuration="3.371736526s" podCreationTimestamp="2026-01-30 16:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:18:01.362707232 +0000 UTC m=+1329.999769831" watchObservedRunningTime="2026-01-30 16:18:01.371736526 +0000 UTC m=+1330.008799125" Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.358749 4740 generic.go:334] "Generic (PLEG): container finished" podID="7b01ab87-38ce-4839-ac41-038201f727f9" containerID="54bf3af07185705ec52d7ad900bbd9b5e90a774964d862b109deffe3c3d59962" exitCode=0 Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.358904 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kb9rw" event={"ID":"7b01ab87-38ce-4839-ac41-038201f727f9","Type":"ContainerDied","Data":"54bf3af07185705ec52d7ad900bbd9b5e90a774964d862b109deffe3c3d59962"} Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.712317 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.852793 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-combined-ca-bundle\") pod \"46b14301-3181-46f9-82ed-2d0ca6a44374\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.853141 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-config-data\") pod \"46b14301-3181-46f9-82ed-2d0ca6a44374\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.853215 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7qtt\" (UniqueName: \"kubernetes.io/projected/46b14301-3181-46f9-82ed-2d0ca6a44374-kube-api-access-d7qtt\") pod \"46b14301-3181-46f9-82ed-2d0ca6a44374\" (UID: \"46b14301-3181-46f9-82ed-2d0ca6a44374\") " Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.860968 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b14301-3181-46f9-82ed-2d0ca6a44374-kube-api-access-d7qtt" (OuterVolumeSpecName: "kube-api-access-d7qtt") pod "46b14301-3181-46f9-82ed-2d0ca6a44374" (UID: "46b14301-3181-46f9-82ed-2d0ca6a44374"). InnerVolumeSpecName "kube-api-access-d7qtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.894858 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46b14301-3181-46f9-82ed-2d0ca6a44374" (UID: "46b14301-3181-46f9-82ed-2d0ca6a44374"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.911792 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-config-data" (OuterVolumeSpecName: "config-data") pod "46b14301-3181-46f9-82ed-2d0ca6a44374" (UID: "46b14301-3181-46f9-82ed-2d0ca6a44374"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.957211 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.957259 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7qtt\" (UniqueName: \"kubernetes.io/projected/46b14301-3181-46f9-82ed-2d0ca6a44374-kube-api-access-d7qtt\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:02 crc kubenswrapper[4740]: I0130 16:18:02.957278 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46b14301-3181-46f9-82ed-2d0ca6a44374-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.370876 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9z8ns" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.370872 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9z8ns" event={"ID":"46b14301-3181-46f9-82ed-2d0ca6a44374","Type":"ContainerDied","Data":"e585c23a3859d69d7e401cfad51f86aaa01950d6de6d347594fc3f3d61acce0b"} Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.372778 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e585c23a3859d69d7e401cfad51f86aaa01950d6de6d347594fc3f3d61acce0b" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.661398 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hwbwt"] Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.661651 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" podUID="b5163ee6-5fe7-40e5-9912-992b75276183" containerName="dnsmasq-dns" containerID="cri-o://9d126e6c3d21ec363f174d65f5f83e020a6b06c2a3f68fc511d829428255c5aa" gracePeriod=10 Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.698541 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-x8x4q"] Jan 30 16:18:03 crc kubenswrapper[4740]: E0130 16:18:03.699050 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b14301-3181-46f9-82ed-2d0ca6a44374" containerName="keystone-db-sync" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.699071 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b14301-3181-46f9-82ed-2d0ca6a44374" containerName="keystone-db-sync" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.699263 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b14301-3181-46f9-82ed-2d0ca6a44374" containerName="keystone-db-sync" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.700042 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.706334 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.706549 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-n88sh" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.706615 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.706764 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.706838 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.722681 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x8x4q"] Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.759214 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-vrmdv"] Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.780477 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-vrmdv"] Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.780551 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-scripts\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.793554 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-credential-keys\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.793827 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qmrp\" (UniqueName: \"kubernetes.io/projected/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-kube-api-access-6qmrp\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.794453 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-config-data\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.794625 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-combined-ca-bundle\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.795205 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-fernet-keys\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.780680 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.900696 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp2m6\" (UniqueName: \"kubernetes.io/projected/076ac73c-b165-4f36-86d0-9c1765872335-kube-api-access-fp2m6\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.900832 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.900906 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-svc\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.901211 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-fernet-keys\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.901274 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.901390 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-scripts\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.901508 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-credential-keys\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.901600 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qmrp\" (UniqueName: \"kubernetes.io/projected/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-kube-api-access-6qmrp\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc 
kubenswrapper[4740]: I0130 16:18:03.901734 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-config\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.901806 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-config-data\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.901846 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.901913 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-combined-ca-bundle\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.927107 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-credential-keys\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.927306 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-combined-ca-bundle\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.927406 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-scripts\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.928174 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-config-data\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.931009 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-fernet-keys\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.944127 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qmrp\" (UniqueName: 
\"kubernetes.io/projected/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-kube-api-access-6qmrp\") pod \"keystone-bootstrap-x8x4q\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.997659 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-lfz95"] Jan 30 16:18:03 crc kubenswrapper[4740]: I0130 16:18:03.999646 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.003707 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-svc\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.003766 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.003854 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-config\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.003887 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.003944 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp2m6\" (UniqueName: \"kubernetes.io/projected/076ac73c-b165-4f36-86d0-9c1765872335-kube-api-access-fp2m6\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.003983 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.004943 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.005725 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-config\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: 
\"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.011407 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.011414 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-54bf9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.011609 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.011793 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.017428 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.017861 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-svc\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.018769 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.035444 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kb9rw" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.035617 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.050813 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vhkg4"] Jan 30 16:18:04 crc kubenswrapper[4740]: E0130 16:18:04.051387 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b01ab87-38ce-4839-ac41-038201f727f9" containerName="glance-db-sync" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.051416 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b01ab87-38ce-4839-ac41-038201f727f9" containerName="glance-db-sync" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.051647 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b01ab87-38ce-4839-ac41-038201f727f9" containerName="glance-db-sync" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.053120 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.054163 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.065168 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp2m6\" (UniqueName: \"kubernetes.io/projected/076ac73c-b165-4f36-86d0-9c1765872335-kube-api-access-fp2m6\") pod \"dnsmasq-dns-5b868669f-vrmdv\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.075535 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rm2xc" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.076928 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.086758 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.088880 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vhkg4"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.095708 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.105944 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b27z\" (UniqueName: \"kubernetes.io/projected/7b01ab87-38ce-4839-ac41-038201f727f9-kube-api-access-6b27z\") pod \"7b01ab87-38ce-4839-ac41-038201f727f9\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.106053 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-db-sync-config-data\") pod \"7b01ab87-38ce-4839-ac41-038201f727f9\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.106227 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-config-data\") pod \"7b01ab87-38ce-4839-ac41-038201f727f9\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.106442 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-combined-ca-bundle\") pod \"7b01ab87-38ce-4839-ac41-038201f727f9\" (UID: \"7b01ab87-38ce-4839-ac41-038201f727f9\") " Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.107007 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-combined-ca-bundle\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.107121 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-config-data\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc 
kubenswrapper[4740]: I0130 16:18:04.107439 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-certs\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.115683 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwpt7\" (UniqueName: \"kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-kube-api-access-vwpt7\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.115806 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-combined-ca-bundle\") pod \"neutron-db-sync-vhkg4\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.116067 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-config\") pod \"neutron-db-sync-vhkg4\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.116176 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-scripts\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.116427 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv8j4\" (UniqueName: \"kubernetes.io/projected/cd120629-d064-4ce0-a5d2-73656425765f-kube-api-access-kv8j4\") pod \"neutron-db-sync-vhkg4\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.125671 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b01ab87-38ce-4839-ac41-038201f727f9-kube-api-access-6b27z" (OuterVolumeSpecName: "kube-api-access-6b27z") pod "7b01ab87-38ce-4839-ac41-038201f727f9" (UID: "7b01ab87-38ce-4839-ac41-038201f727f9"). InnerVolumeSpecName "kube-api-access-6b27z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.162332 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-lfz95"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.185239 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7b01ab87-38ce-4839-ac41-038201f727f9" (UID: "7b01ab87-38ce-4839-ac41-038201f727f9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.199237 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.228672 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv8j4\" (UniqueName: \"kubernetes.io/projected/cd120629-d064-4ce0-a5d2-73656425765f-kube-api-access-kv8j4\") pod \"neutron-db-sync-vhkg4\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.228767 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-combined-ca-bundle\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.228810 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-config-data\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.236665 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-certs\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.236784 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwpt7\" (UniqueName: \"kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-kube-api-access-vwpt7\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.236842 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-combined-ca-bundle\") pod \"neutron-db-sync-vhkg4\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.237012 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-config\") pod \"neutron-db-sync-vhkg4\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.237093 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-scripts\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.237408 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b27z\" (UniqueName: \"kubernetes.io/projected/7b01ab87-38ce-4839-ac41-038201f727f9-kube-api-access-6b27z\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.237474 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.250943 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-pkfjm"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.259366 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-config-data\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.262318 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-combined-ca-bundle\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.279655 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.297947 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-scripts\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.303003 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.304038 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kn5jt" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.309340 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b01ab87-38ce-4839-ac41-038201f727f9" (UID: "7b01ab87-38ce-4839-ac41-038201f727f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.317685 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwpt7\" (UniqueName: \"kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-kube-api-access-vwpt7\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.318848 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-config\") pod \"neutron-db-sync-vhkg4\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.318948 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv8j4\" (UniqueName: \"kubernetes.io/projected/cd120629-d064-4ce0-a5d2-73656425765f-kube-api-access-kv8j4\") pod \"neutron-db-sync-vhkg4\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.320395 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.321718 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-combined-ca-bundle\") pod \"neutron-db-sync-vhkg4\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.335107 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-certs\") pod \"cloudkitty-db-sync-lfz95\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.346314 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-db-sync-config-data\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.346468 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx24p\" (UniqueName: \"kubernetes.io/projected/2754b498-304b-47aa-a2d3-71a9c2f70e8e-kube-api-access-vx24p\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.346550 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-config-data\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.346599 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-scripts\") pod \"cinder-db-sync-pkfjm\" (UID: 
\"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.346724 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2754b498-304b-47aa-a2d3-71a9c2f70e8e-etc-machine-id\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.346779 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-combined-ca-bundle\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.349984 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.424032 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pkfjm"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.442956 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-config-data" (OuterVolumeSpecName: "config-data") pod "7b01ab87-38ce-4839-ac41-038201f727f9" (UID: "7b01ab87-38ce-4839-ac41-038201f727f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.454566 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx24p\" (UniqueName: \"kubernetes.io/projected/2754b498-304b-47aa-a2d3-71a9c2f70e8e-kube-api-access-vx24p\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.454675 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-config-data\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.454725 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-scripts\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.454820 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2754b498-304b-47aa-a2d3-71a9c2f70e8e-etc-machine-id\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.454855 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-combined-ca-bundle\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " 
pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.454974 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-db-sync-config-data\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.455059 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b01ab87-38ce-4839-ac41-038201f727f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.458165 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2754b498-304b-47aa-a2d3-71a9c2f70e8e-etc-machine-id\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.461980 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-combined-ca-bundle\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.462331 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kb9rw" event={"ID":"7b01ab87-38ce-4839-ac41-038201f727f9","Type":"ContainerDied","Data":"2ea1c175589bc19944f9d5be23959c4167edea72d09fd8c1bd42562edd60fae1"} Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.462385 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ea1c175589bc19944f9d5be23959c4167edea72d09fd8c1bd42562edd60fae1" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.462546 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kb9rw" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.477591 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-config-data\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.477683 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.481514 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-scripts\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.483522 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.488281 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.488284 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.488579 4740 generic.go:334] "Generic (PLEG): container finished" podID="b5163ee6-5fe7-40e5-9912-992b75276183" containerID="9d126e6c3d21ec363f174d65f5f83e020a6b06c2a3f68fc511d829428255c5aa" exitCode=0 Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.489212 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" event={"ID":"b5163ee6-5fe7-40e5-9912-992b75276183","Type":"ContainerDied","Data":"9d126e6c3d21ec363f174d65f5f83e020a6b06c2a3f68fc511d829428255c5aa"} Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.488625 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx24p\" (UniqueName: \"kubernetes.io/projected/2754b498-304b-47aa-a2d3-71a9c2f70e8e-kube-api-access-vx24p\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.490847 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-db-sync-config-data\") pod \"cinder-db-sync-pkfjm\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.499749 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.529683 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2k9q9"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.534438 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.536226 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.537606 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cr9bj" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.548601 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.552241 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.581804 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2k9q9"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.631224 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.649311 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.667200 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-scripts\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.668481 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-config-data\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.668535 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-log-httpd\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.668724 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7lq\" (UniqueName: \"kubernetes.io/projected/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-kube-api-access-kb7lq\") pod \"barbican-db-sync-2k9q9\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.668786 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mkq7\" (UniqueName: \"kubernetes.io/projected/95fa343f-47ce-425f-a254-58264f0a3f6b-kube-api-access-2mkq7\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.669470 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-vrmdv"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.673897 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-combined-ca-bundle\") pod \"barbican-db-sync-2k9q9\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.674004 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-run-httpd\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.674175 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.674204 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-db-sync-config-data\") pod 
\"barbican-db-sync-2k9q9\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.674259 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.683603 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-svzr5"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.685558 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.709386 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-svzr5"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.730683 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ltwk6"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.734709 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.740805 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.741135 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cs2pd" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.742942 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.743080 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ltwk6"] Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.765403 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.776854 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-config\") pod \"b5163ee6-5fe7-40e5-9912-992b75276183\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777009 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7x6n\" (UniqueName: \"kubernetes.io/projected/b5163ee6-5fe7-40e5-9912-992b75276183-kube-api-access-h7x6n\") pod \"b5163ee6-5fe7-40e5-9912-992b75276183\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777102 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-svc\") pod \"b5163ee6-5fe7-40e5-9912-992b75276183\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777211 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-nb\") pod \"b5163ee6-5fe7-40e5-9912-992b75276183\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777315 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-sb\") pod \"b5163ee6-5fe7-40e5-9912-992b75276183\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777336 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-swift-storage-0\") pod \"b5163ee6-5fe7-40e5-9912-992b75276183\" (UID: \"b5163ee6-5fe7-40e5-9912-992b75276183\") " Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777684 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777722 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-run-httpd\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777754 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-scripts\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777781 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-combined-ca-bundle\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777820 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777841 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-db-sync-config-data\") pod \"barbican-db-sync-2k9q9\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777868 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777904 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwg64\" (UniqueName: \"kubernetes.io/projected/169e80d0-763e-400e-9369-bd048b982484-kube-api-access-kwg64\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777930 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-scripts\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777949 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-config-data\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777968 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-config\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.777994 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e68ec665-a90a-4332-8e78-79f658776815-logs\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.778018 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-config-data\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " 
pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.778034 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-log-httpd\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.778060 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-svc\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.778099 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.778129 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb7lq\" (UniqueName: \"kubernetes.io/projected/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-kube-api-access-kb7lq\") pod \"barbican-db-sync-2k9q9\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.778156 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mkq7\" (UniqueName: \"kubernetes.io/projected/95fa343f-47ce-425f-a254-58264f0a3f6b-kube-api-access-2mkq7\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.778183 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.778225 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-combined-ca-bundle\") pod \"barbican-db-sync-2k9q9\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.778249 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-894tt\" (UniqueName: \"kubernetes.io/projected/e68ec665-a90a-4332-8e78-79f658776815-kube-api-access-894tt\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.780208 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-log-httpd\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.796067 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-run-httpd\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.804229 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-combined-ca-bundle\") pod \"barbican-db-sync-2k9q9\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.817383 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb7lq\" (UniqueName: \"kubernetes.io/projected/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-kube-api-access-kb7lq\") pod \"barbican-db-sync-2k9q9\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.835417 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-db-sync-config-data\") pod \"barbican-db-sync-2k9q9\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.841805 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5163ee6-5fe7-40e5-9912-992b75276183-kube-api-access-h7x6n" (OuterVolumeSpecName: "kube-api-access-h7x6n") pod "b5163ee6-5fe7-40e5-9912-992b75276183" (UID: "b5163ee6-5fe7-40e5-9912-992b75276183"). InnerVolumeSpecName "kube-api-access-h7x6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.847239 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.847956 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-config-data\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.857330 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-scripts\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.866919 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mkq7\" (UniqueName: \"kubernetes.io/projected/95fa343f-47ce-425f-a254-58264f0a3f6b-kube-api-access-2mkq7\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.868873 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.891733 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-config-data\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.892267 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-config\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.892321 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e68ec665-a90a-4332-8e78-79f658776815-logs\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.892456 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-svc\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.892564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.892712 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.892856 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-894tt\" (UniqueName: \"kubernetes.io/projected/e68ec665-a90a-4332-8e78-79f658776815-kube-api-access-894tt\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.892915 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.892979 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-scripts\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.893032 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-combined-ca-bundle\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.894650 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " pod="openstack/ceilometer-0" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.898286 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.898789 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.899336 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e68ec665-a90a-4332-8e78-79f658776815-logs\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.902138 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.904240 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-config\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.910978 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-svc\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.927621 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwg64\" (UniqueName: \"kubernetes.io/projected/169e80d0-763e-400e-9369-bd048b982484-kube-api-access-kwg64\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:04 crc kubenswrapper[4740]: I0130 16:18:04.927845 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7x6n\" (UniqueName: \"kubernetes.io/projected/b5163ee6-5fe7-40e5-9912-992b75276183-kube-api-access-h7x6n\") on node 
\"crc\" DevicePath \"\"" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.009429 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-894tt\" (UniqueName: \"kubernetes.io/projected/e68ec665-a90a-4332-8e78-79f658776815-kube-api-access-894tt\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.019070 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-config-data\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.023504 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-combined-ca-bundle\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.062579 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-scripts\") pod \"placement-db-sync-ltwk6\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.101453 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ltwk6" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.144314 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.162906 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwg64\" (UniqueName: \"kubernetes.io/projected/169e80d0-763e-400e-9369-bd048b982484-kube-api-access-kwg64\") pod \"dnsmasq-dns-cf78879c9-svzr5\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.232409 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-svzr5"] Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.233685 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.239720 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-config" (OuterVolumeSpecName: "config") pod "b5163ee6-5fe7-40e5-9912-992b75276183" (UID: "b5163ee6-5fe7-40e5-9912-992b75276183"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.245893 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p77nv"] Jan 30 16:18:05 crc kubenswrapper[4740]: E0130 16:18:05.246462 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5163ee6-5fe7-40e5-9912-992b75276183" containerName="init" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.246475 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5163ee6-5fe7-40e5-9912-992b75276183" containerName="init" Jan 30 16:18:05 crc kubenswrapper[4740]: E0130 16:18:05.246500 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5163ee6-5fe7-40e5-9912-992b75276183" containerName="dnsmasq-dns" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.246507 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5163ee6-5fe7-40e5-9912-992b75276183" containerName="dnsmasq-dns" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.246709 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5163ee6-5fe7-40e5-9912-992b75276183" containerName="dnsmasq-dns" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.247878 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.241997 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.309506 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b5163ee6-5fe7-40e5-9912-992b75276183" (UID: "b5163ee6-5fe7-40e5-9912-992b75276183"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.310075 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b5163ee6-5fe7-40e5-9912-992b75276183" (UID: "b5163ee6-5fe7-40e5-9912-992b75276183"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.310550 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5163ee6-5fe7-40e5-9912-992b75276183" (UID: "b5163ee6-5fe7-40e5-9912-992b75276183"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.369457 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p77nv"] Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.441093 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.442925 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.443111 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7rd\" (UniqueName: \"kubernetes.io/projected/e2398e05-2c84-4851-922a-3e6a7c9e3994-kube-api-access-vl7rd\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.443321 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.443532 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.461187 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b5163ee6-5fe7-40e5-9912-992b75276183" (UID: "b5163ee6-5fe7-40e5-9912-992b75276183"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.461983 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-config\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.463987 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.464109 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.464132 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.464143 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5163ee6-5fe7-40e5-9912-992b75276183-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.603037 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x8x4q"] Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.624916 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.624992 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.625044 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl7rd\" (UniqueName: \"kubernetes.io/projected/e2398e05-2c84-4851-922a-3e6a7c9e3994-kube-api-access-vl7rd\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.625065 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.625144 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: 
\"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.625272 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-config\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.628697 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-config\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.631288 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.634277 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.634421 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.635334 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.665851 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl7rd\" (UniqueName: \"kubernetes.io/projected/e2398e05-2c84-4851-922a-3e6a7c9e3994-kube-api-access-vl7rd\") pod \"dnsmasq-dns-56df8fb6b7-p77nv\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.725889 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.738128 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-hwbwt" event={"ID":"b5163ee6-5fe7-40e5-9912-992b75276183","Type":"ContainerDied","Data":"8e9b986df9dae5593dce5124312b0fbe1a29d9e131eab312657758565eb13f90"} Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.738260 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-vrmdv"] Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.738296 4740 scope.go:117] "RemoveContainer" containerID="9d126e6c3d21ec363f174d65f5f83e020a6b06c2a3f68fc511d829428255c5aa" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.748067 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vhkg4"] Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.773009 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8x4q" event={"ID":"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26","Type":"ContainerStarted","Data":"b4a1e7ca229ab05b2aca935112e42c8e907d5ee11134daef325fd33fc378bbdc"} Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.782415 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.784481 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hwbwt"] Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.857490 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-hwbwt"] Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.923657 4740 scope.go:117] "RemoveContainer" containerID="5092f1bb4975663aa79a1054efab62e555bc4d0b6d04832ff6bcf9dd41d8dab1" Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.932743 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-lfz95"] Jan 30 16:18:05 crc kubenswrapper[4740]: I0130 16:18:05.959905 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pkfjm"] Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.025988 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.029496 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.036848 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.037084 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h2bf2" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.040772 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.101416 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:18:06 crc kubenswrapper[4740]: E0130 16:18:06.127386 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5163ee6_5fe7_40e5_9912_992b75276183.slice\": RecentStats: unable to find data in memory cache]" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.149548 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.149589 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.149624 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.149744 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd27b\" (UniqueName: \"kubernetes.io/projected/04e1512f-f234-4bd3-8139-8affc693b3d6-kube-api-access-sd27b\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.149782 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-scripts\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.150039 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " 
pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.150064 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-logs\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.204224 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2k9q9"] Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.252622 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.253037 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-logs\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.253139 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.253167 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.253190 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.253234 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd27b\" (UniqueName: \"kubernetes.io/projected/04e1512f-f234-4bd3-8139-8affc693b3d6-kube-api-access-sd27b\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.253253 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-scripts\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.257134 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.263315 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-logs\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.274643 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.281226 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.281271 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d7b0e4f5e4f3aeb77e4c3eeab8492d1ed4d740072e82a0a742970a29e35f2749/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.292182 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.299027 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.303075 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.304535 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.307158 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-scripts\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.325122 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.335511 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.341003 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd27b\" (UniqueName: 
\"kubernetes.io/projected/04e1512f-f234-4bd3-8139-8affc693b3d6-kube-api-access-sd27b\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.355331 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqn5\" (UniqueName: \"kubernetes.io/projected/874bf537-5ab4-4254-97e4-294440aa41ff-kube-api-access-hvqn5\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.355391 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.355457 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.355494 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.355522 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.355632 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.355670 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: E0130 16:18:06.357438 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="04e1512f-f234-4bd3-8139-8affc693b3d6" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.454066 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-db-sync-ltwk6"] Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.459191 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.459280 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.459335 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvqn5\" (UniqueName: \"kubernetes.io/projected/874bf537-5ab4-4254-97e4-294440aa41ff-kube-api-access-hvqn5\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.459422 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.459484 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.459543 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.459591 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.460499 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-logs\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.461404 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc 
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.464986 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 16:18:06 crc kubenswrapper[4740]: E0130 16:18:06.466304 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data glance kube-api-access-hvqn5 scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="874bf537-5ab4-4254-97e4-294440aa41ff"
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.484492 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-scripts\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.487443 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.487509 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/60fa5ce19fcc327994c70afc2a90d04a285bdf3051c4029a47293957e337f4a5/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.487887 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.499640 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-config-data\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.509128 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvqn5\" (UniqueName: \"kubernetes.io/projected/874bf537-5ab4-4254-97e4-294440aa41ff-kube-api-access-hvqn5\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.606940 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.690194 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.696263 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-svzr5"]
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.775371 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.832377 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pkfjm" event={"ID":"2754b498-304b-47aa-a2d3-71a9c2f70e8e","Type":"ContainerStarted","Data":"4003629e0a0e9cad413ce410852bfb84947413140ecbb55ea4655ea2ca304ef3"}
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.838113 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ltwk6" event={"ID":"e68ec665-a90a-4332-8e78-79f658776815","Type":"ContainerStarted","Data":"39fd0f4a8564ec517802c22737e8e4421c8cf2ea83cac203f5890f8ae73e35e0"}
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.838266 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.850288 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2k9q9" event={"ID":"cc1c912a-97a6-4de7-ad45-ced02c0f40e5","Type":"ContainerStarted","Data":"6fabf4cb37495f27c7a01306b6e2ca694cc4a37f89a3e0ad0665f16e2a89c249"}
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.865986 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-lfz95" event={"ID":"29755348-1e90-4436-8a60-a2823c2804fd","Type":"ContainerStarted","Data":"000b796aaa5b9161d256e3bdc502850a08f9fb81d7619ffaa0b2fe88748233d5"}
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.870391 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-svzr5" event={"ID":"169e80d0-763e-400e-9369-bd048b982484","Type":"ContainerStarted","Data":"0e90422b74f3780b4c4903b7b89150267a30b429417d864d19dea975681d8275"}
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.873507 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-vrmdv" event={"ID":"076ac73c-b165-4f36-86d0-9c1765872335","Type":"ContainerStarted","Data":"751093be277014c2d719d82d4d94582de135957a3acce49943bca4b9accbbb8a"}
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.893544 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8x4q" event={"ID":"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26","Type":"ContainerStarted","Data":"07f2c899e94841a9469b670a950d98f5fdd1ecf52f9bb42ea3bf534141dc91b2"}
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.913678 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.913924 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vhkg4" event={"ID":"cd120629-d064-4ce0-a5d2-73656425765f","Type":"ContainerStarted","Data":"bc2d0c5becdf094211ca70544fb415732deded48f948d404967beaece2be777e"}
Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.924151 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.957861 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p77nv"] Jan 30 16:18:06 crc kubenswrapper[4740]: I0130 16:18:06.959197 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-x8x4q" podStartSLOduration=3.958385813 podStartE2EDuration="3.958385813s" podCreationTimestamp="2026-01-30 16:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:18:06.932967372 +0000 UTC m=+1335.570029981" watchObservedRunningTime="2026-01-30 16:18:06.958385813 +0000 UTC m=+1335.595448412" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.001961 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vhkg4" podStartSLOduration=4.001937345 podStartE2EDuration="4.001937345s" podCreationTimestamp="2026-01-30 16:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:18:06.978941184 +0000 UTC m=+1335.616003783" watchObservedRunningTime="2026-01-30 16:18:07.001937345 +0000 UTC m=+1335.638999944" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.169450 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.198469 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.229422 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvqn5\" (UniqueName: \"kubernetes.io/projected/874bf537-5ab4-4254-97e4-294440aa41ff-kube-api-access-hvqn5\") pod \"874bf537-5ab4-4254-97e4-294440aa41ff\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.229603 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"04e1512f-f234-4bd3-8139-8affc693b3d6\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.229642 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-logs\") pod \"04e1512f-f234-4bd3-8139-8affc693b3d6\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.229715 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"874bf537-5ab4-4254-97e4-294440aa41ff\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.229740 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd27b\" (UniqueName: \"kubernetes.io/projected/04e1512f-f234-4bd3-8139-8affc693b3d6-kube-api-access-sd27b\") pod \"04e1512f-f234-4bd3-8139-8affc693b3d6\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " Jan 30 16:18:07 crc 
kubenswrapper[4740]: I0130 16:18:07.233000 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-logs" (OuterVolumeSpecName: "logs") pod "04e1512f-f234-4bd3-8139-8affc693b3d6" (UID: "04e1512f-f234-4bd3-8139-8affc693b3d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.233199 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-combined-ca-bundle\") pod \"874bf537-5ab4-4254-97e4-294440aa41ff\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.233305 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-httpd-run\") pod \"874bf537-5ab4-4254-97e4-294440aa41ff\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.233742 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-scripts\") pod \"874bf537-5ab4-4254-97e4-294440aa41ff\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.233903 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-config-data\") pod \"04e1512f-f234-4bd3-8139-8affc693b3d6\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.234061 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-config-data\") pod \"874bf537-5ab4-4254-97e4-294440aa41ff\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.234270 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-logs\") pod \"874bf537-5ab4-4254-97e4-294440aa41ff\" (UID: \"874bf537-5ab4-4254-97e4-294440aa41ff\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.234419 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-httpd-run\") pod \"04e1512f-f234-4bd3-8139-8affc693b3d6\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.235485 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-combined-ca-bundle\") pod \"04e1512f-f234-4bd3-8139-8affc693b3d6\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.235600 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-scripts\") pod \"04e1512f-f234-4bd3-8139-8affc693b3d6\" (UID: \"04e1512f-f234-4bd3-8139-8affc693b3d6\") " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.236715 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-logs\") on node \"crc\" DevicePath \"\""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.233708 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "874bf537-5ab4-4254-97e4-294440aa41ff" (UID: "874bf537-5ab4-4254-97e4-294440aa41ff"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.238793 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "04e1512f-f234-4bd3-8139-8affc693b3d6" (UID: "04e1512f-f234-4bd3-8139-8affc693b3d6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.239058 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-logs" (OuterVolumeSpecName: "logs") pod "874bf537-5ab4-4254-97e4-294440aa41ff" (UID: "874bf537-5ab4-4254-97e4-294440aa41ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.256213 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-scripts" (OuterVolumeSpecName: "scripts") pod "874bf537-5ab4-4254-97e4-294440aa41ff" (UID: "874bf537-5ab4-4254-97e4-294440aa41ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.260877 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e1512f-f234-4bd3-8139-8affc693b3d6-kube-api-access-sd27b" (OuterVolumeSpecName: "kube-api-access-sd27b") pod "04e1512f-f234-4bd3-8139-8affc693b3d6" (UID: "04e1512f-f234-4bd3-8139-8affc693b3d6"). InnerVolumeSpecName "kube-api-access-sd27b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.262009 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "874bf537-5ab4-4254-97e4-294440aa41ff" (UID: "874bf537-5ab4-4254-97e4-294440aa41ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.262024 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-config-data" (OuterVolumeSpecName: "config-data") pod "04e1512f-f234-4bd3-8139-8affc693b3d6" (UID: "04e1512f-f234-4bd3-8139-8affc693b3d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.263580 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-config-data" (OuterVolumeSpecName: "config-data") pod "874bf537-5ab4-4254-97e4-294440aa41ff" (UID: "874bf537-5ab4-4254-97e4-294440aa41ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.267112 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874bf537-5ab4-4254-97e4-294440aa41ff-kube-api-access-hvqn5" (OuterVolumeSpecName: "kube-api-access-hvqn5") pod "874bf537-5ab4-4254-97e4-294440aa41ff" (UID: "874bf537-5ab4-4254-97e4-294440aa41ff"). InnerVolumeSpecName "kube-api-access-hvqn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.267173 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04e1512f-f234-4bd3-8139-8affc693b3d6" (UID: "04e1512f-f234-4bd3-8139-8affc693b3d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.270790 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-scripts" (OuterVolumeSpecName: "scripts") pod "04e1512f-f234-4bd3-8139-8affc693b3d6" (UID: "04e1512f-f234-4bd3-8139-8affc693b3d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.341694 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3" (OuterVolumeSpecName: "glance") pod "874bf537-5ab4-4254-97e4-294440aa41ff" (UID: "874bf537-5ab4-4254-97e4-294440aa41ff"). InnerVolumeSpecName "pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3". PluginName "kubernetes.io/csi", VolumeGidValue ""
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.347783 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvqn5\" (UniqueName: \"kubernetes.io/projected/874bf537-5ab4-4254-97e4-294440aa41ff-kube-api-access-hvqn5\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.348461 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") on node \"crc\" " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.348750 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd27b\" (UniqueName: \"kubernetes.io/projected/04e1512f-f234-4bd3-8139-8affc693b3d6-kube-api-access-sd27b\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.348846 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.348902 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.348969 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.349027 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.349081 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/874bf537-5ab4-4254-97e4-294440aa41ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.349131 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/874bf537-5ab4-4254-97e4-294440aa41ff-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.349183 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04e1512f-f234-4bd3-8139-8affc693b3d6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.349236 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.347960 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4" (OuterVolumeSpecName: "glance") pod "04e1512f-f234-4bd3-8139-8affc693b3d6" (UID: "04e1512f-f234-4bd3-8139-8affc693b3d6"). InnerVolumeSpecName "pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.350184 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e1512f-f234-4bd3-8139-8affc693b3d6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.386892 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5163ee6-5fe7-40e5-9912-992b75276183" path="/var/lib/kubelet/pods/b5163ee6-5fe7-40e5-9912-992b75276183/volumes" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.391141 4740 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.391323 4740 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3") on node "crc" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.452859 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") on node \"crc\" " Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.452898 4740 reconciler_common.go:293] "Volume detached for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.499758 4740 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.502570 4740 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4") on node "crc"
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.556789 4740 reconciler_common.go:293] "Volume detached for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") on node \"crc\" DevicePath \"\""
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.936743 4740 generic.go:334] "Generic (PLEG): container finished" podID="e2398e05-2c84-4851-922a-3e6a7c9e3994" containerID="ba97843e560710ef3a46741da6387fb3e1a32c449c73e975819dbf7ceac2c379" exitCode=0
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.936816 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" event={"ID":"e2398e05-2c84-4851-922a-3e6a7c9e3994","Type":"ContainerDied","Data":"ba97843e560710ef3a46741da6387fb3e1a32c449c73e975819dbf7ceac2c379"}
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.936845 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" event={"ID":"e2398e05-2c84-4851-922a-3e6a7c9e3994","Type":"ContainerStarted","Data":"cec197cc74331c524124cf1457a2e1c116fca3223331dcfe091c0cfb8f1929ff"}
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.941166 4740 generic.go:334] "Generic (PLEG): container finished" podID="169e80d0-763e-400e-9369-bd048b982484" containerID="1d7df150bcc08e642ec8f9eb0dc85a4ce76e7528cef3efe6fecb4ab05e6188d1" exitCode=0
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.941253 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-svzr5" event={"ID":"169e80d0-763e-400e-9369-bd048b982484","Type":"ContainerDied","Data":"1d7df150bcc08e642ec8f9eb0dc85a4ce76e7528cef3efe6fecb4ab05e6188d1"}
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.950377 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95fa343f-47ce-425f-a254-58264f0a3f6b","Type":"ContainerStarted","Data":"239fda0a97f69a459b895a9765a6fb59174f1d4660ea0e72a19a91b9de60faf6"}
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.953291 4740 generic.go:334] "Generic (PLEG): container finished" podID="076ac73c-b165-4f36-86d0-9c1765872335" containerID="2ca399e80f718cf2d3fe4351fed6f4f8364e0e56c78fd26d207b1c42d0974bab" exitCode=0
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.953554 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-vrmdv" event={"ID":"076ac73c-b165-4f36-86d0-9c1765872335","Type":"ContainerDied","Data":"2ca399e80f718cf2d3fe4351fed6f4f8364e0e56c78fd26d207b1c42d0974bab"}
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.956339 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vhkg4" event={"ID":"cd120629-d064-4ce0-a5d2-73656425765f","Type":"ContainerStarted","Data":"746789ef9658790b905a92511842cb3dd5f1417a1a034f491db8cf9b203b0a98"}
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.956768 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 16:18:07 crc kubenswrapper[4740]: I0130 16:18:07.956864 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.370574 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.434512 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.484044 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.514094 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.523013 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.528819 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.529111 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h2bf2"
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.529551 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.549049 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.554884 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.554964 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-logs\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.555063 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.555094 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0"
Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.555182 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0"
\"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.555234 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.555254 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-467qp\" (UniqueName: \"kubernetes.io/projected/69148244-f94e-4ae9-9240-8fbcd54aa0ca-kube-api-access-467qp\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.582793 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.587021 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.632141 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:18:08 crc kubenswrapper[4740]: E0130 16:18:08.633277 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="076ac73c-b165-4f36-86d0-9c1765872335" containerName="init" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.633303 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="076ac73c-b165-4f36-86d0-9c1765872335" containerName="init" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.633599 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="076ac73c-b165-4f36-86d0-9c1765872335" containerName="init" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.635118 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.638072 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.670936 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.671393 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.671901 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.671971 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.671995 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-467qp\" (UniqueName: \"kubernetes.io/projected/69148244-f94e-4ae9-9240-8fbcd54aa0ca-kube-api-access-467qp\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.672013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.672057 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.672122 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.672369 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-logs\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.672410 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2ffb\" (UniqueName: \"kubernetes.io/projected/5d103756-0111-4d00-bff2-438bfdaa8037-kube-api-access-k2ffb\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.672452 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.672536 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.672600 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.672623 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.678040 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.678249 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.678965 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-logs\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.679227 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.681275 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.682041 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.696869 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.696932 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d7b0e4f5e4f3aeb77e4c3eeab8492d1ed4d740072e82a0a742970a29e35f2749/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.702745 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-467qp\" (UniqueName: \"kubernetes.io/projected/69148244-f94e-4ae9-9240-8fbcd54aa0ca-kube-api-access-467qp\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.727302 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.773902 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.774091 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-swift-storage-0\") pod \"076ac73c-b165-4f36-86d0-9c1765872335\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.791677 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-sb\") pod \"076ac73c-b165-4f36-86d0-9c1765872335\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.791785 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-svc\") pod \"076ac73c-b165-4f36-86d0-9c1765872335\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.791876 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp2m6\" (UniqueName: \"kubernetes.io/projected/076ac73c-b165-4f36-86d0-9c1765872335-kube-api-access-fp2m6\") pod \"076ac73c-b165-4f36-86d0-9c1765872335\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.793243 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-nb\") pod \"076ac73c-b165-4f36-86d0-9c1765872335\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.793365 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-config\") pod \"076ac73c-b165-4f36-86d0-9c1765872335\" (UID: \"076ac73c-b165-4f36-86d0-9c1765872335\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.794412 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.794670 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.794767 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.794921 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.794977 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2ffb\" (UniqueName: \"kubernetes.io/projected/5d103756-0111-4d00-bff2-438bfdaa8037-kube-api-access-k2ffb\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.795101 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.795248 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.802211 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.802846 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.802897 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.805497 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/076ac73c-b165-4f36-86d0-9c1765872335-kube-api-access-fp2m6" (OuterVolumeSpecName: "kube-api-access-fp2m6") pod "076ac73c-b165-4f36-86d0-9c1765872335" (UID: "076ac73c-b165-4f36-86d0-9c1765872335"). InnerVolumeSpecName "kube-api-access-fp2m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.811481 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.811552 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/60fa5ce19fcc327994c70afc2a90d04a285bdf3051c4029a47293957e337f4a5/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.812163 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.814434 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.864712 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2ffb\" (UniqueName: \"kubernetes.io/projected/5d103756-0111-4d00-bff2-438bfdaa8037-kube-api-access-k2ffb\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.875532 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.880595 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "076ac73c-b165-4f36-86d0-9c1765872335" (UID: "076ac73c-b165-4f36-86d0-9c1765872335"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.892804 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "076ac73c-b165-4f36-86d0-9c1765872335" (UID: "076ac73c-b165-4f36-86d0-9c1765872335"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.897013 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-swift-storage-0\") pod \"169e80d0-763e-400e-9369-bd048b982484\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.897104 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-sb\") pod \"169e80d0-763e-400e-9369-bd048b982484\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.897272 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwg64\" (UniqueName: \"kubernetes.io/projected/169e80d0-763e-400e-9369-bd048b982484-kube-api-access-kwg64\") pod \"169e80d0-763e-400e-9369-bd048b982484\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.897449 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-svc\") pod \"169e80d0-763e-400e-9369-bd048b982484\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.897533 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-nb\") pod \"169e80d0-763e-400e-9369-bd048b982484\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.897564 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-config\") pod \"169e80d0-763e-400e-9369-bd048b982484\" (UID: \"169e80d0-763e-400e-9369-bd048b982484\") " Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.898731 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.898756 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.898795 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp2m6\" (UniqueName: \"kubernetes.io/projected/076ac73c-b165-4f36-86d0-9c1765872335-kube-api-access-fp2m6\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.925648 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169e80d0-763e-400e-9369-bd048b982484-kube-api-access-kwg64" (OuterVolumeSpecName: "kube-api-access-kwg64") pod "169e80d0-763e-400e-9369-bd048b982484" (UID: "169e80d0-763e-400e-9369-bd048b982484"). InnerVolumeSpecName "kube-api-access-kwg64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.930412 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "076ac73c-b165-4f36-86d0-9c1765872335" (UID: "076ac73c-b165-4f36-86d0-9c1765872335"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.959928 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-config" (OuterVolumeSpecName: "config") pod "169e80d0-763e-400e-9369-bd048b982484" (UID: "169e80d0-763e-400e-9369-bd048b982484"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.960381 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "076ac73c-b165-4f36-86d0-9c1765872335" (UID: "076ac73c-b165-4f36-86d0-9c1765872335"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.964801 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.970507 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "169e80d0-763e-400e-9369-bd048b982484" (UID: "169e80d0-763e-400e-9369-bd048b982484"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.985381 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "169e80d0-763e-400e-9369-bd048b982484" (UID: "169e80d0-763e-400e-9369-bd048b982484"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.995194 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-config" (OuterVolumeSpecName: "config") pod "076ac73c-b165-4f36-86d0-9c1765872335" (UID: "076ac73c-b165-4f36-86d0-9c1765872335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:08 crc kubenswrapper[4740]: I0130 16:18:08.998130 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.003700 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwg64\" (UniqueName: \"kubernetes.io/projected/169e80d0-763e-400e-9369-bd048b982484-kube-api-access-kwg64\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.003741 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.003754 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.003769 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.003782 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.003793 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.003804 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/076ac73c-b165-4f36-86d0-9c1765872335-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.006620 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "169e80d0-763e-400e-9369-bd048b982484" (UID: "169e80d0-763e-400e-9369-bd048b982484"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.008574 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "169e80d0-763e-400e-9369-bd048b982484" (UID: "169e80d0-763e-400e-9369-bd048b982484"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.047085 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" event={"ID":"e2398e05-2c84-4851-922a-3e6a7c9e3994","Type":"ContainerStarted","Data":"2cc7fe78c8ee9c4350e085f05da149ad6bbbdebfefa0da3f35df103e4d9ab8cd"} Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.047194 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.082556 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-svzr5" event={"ID":"169e80d0-763e-400e-9369-bd048b982484","Type":"ContainerDied","Data":"0e90422b74f3780b4c4903b7b89150267a30b429417d864d19dea975681d8275"} Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.082651 4740 scope.go:117] "RemoveContainer" containerID="1d7df150bcc08e642ec8f9eb0dc85a4ce76e7528cef3efe6fecb4ab05e6188d1" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.082673 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-svzr5" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.085685 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" podStartSLOduration=4.085661596 podStartE2EDuration="4.085661596s" podCreationTimestamp="2026-01-30 16:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:18:09.079985955 +0000 UTC m=+1337.717048574" watchObservedRunningTime="2026-01-30 16:18:09.085661596 +0000 UTC m=+1337.722724195" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.101043 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-vrmdv" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.101910 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-vrmdv" event={"ID":"076ac73c-b165-4f36-86d0-9c1765872335","Type":"ContainerDied","Data":"751093be277014c2d719d82d4d94582de135957a3acce49943bca4b9accbbb8a"} Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.107119 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.107197 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/169e80d0-763e-400e-9369-bd048b982484-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.220911 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-svzr5"] Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.316084 4740 scope.go:117] "RemoveContainer" containerID="2ca399e80f718cf2d3fe4351fed6f4f8364e0e56c78fd26d207b1c42d0974bab" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.403478 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04e1512f-f234-4bd3-8139-8affc693b3d6" path="/var/lib/kubelet/pods/04e1512f-f234-4bd3-8139-8affc693b3d6/volumes" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.404105 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874bf537-5ab4-4254-97e4-294440aa41ff" path="/var/lib/kubelet/pods/874bf537-5ab4-4254-97e4-294440aa41ff/volumes" Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.404848 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-svzr5"] Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.440726 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-vrmdv"] Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.460329 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-vrmdv"] Jan 30 16:18:09 crc kubenswrapper[4740]: I0130 16:18:09.952746 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:18:10 crc kubenswrapper[4740]: W0130 16:18:10.027824 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d103756_0111_4d00_bff2_438bfdaa8037.slice/crio-c7109d9af77df124136e090cc7b1411f0705c35d05359548868fb4cb045950f7 WatchSource:0}: Error finding container c7109d9af77df124136e090cc7b1411f0705c35d05359548868fb4cb045950f7: Status 404 returned error can't find the container with id c7109d9af77df124136e090cc7b1411f0705c35d05359548868fb4cb045950f7 Jan 30 16:18:10 crc kubenswrapper[4740]: I0130 16:18:10.121234 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d103756-0111-4d00-bff2-438bfdaa8037","Type":"ContainerStarted","Data":"c7109d9af77df124136e090cc7b1411f0705c35d05359548868fb4cb045950f7"} Jan 30 16:18:10 crc kubenswrapper[4740]: I0130 16:18:10.160157 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:18:11 crc kubenswrapper[4740]: I0130 16:18:11.163983 4740 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148244-f94e-4ae9-9240-8fbcd54aa0ca","Type":"ContainerStarted","Data":"4c418bcb3a2dfdc3dcee43796cc289b82666e57d2572fa9fd7436b12b9335ba8"} Jan 30 16:18:11 crc kubenswrapper[4740]: I0130 16:18:11.164590 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148244-f94e-4ae9-9240-8fbcd54aa0ca","Type":"ContainerStarted","Data":"59c201289237516ae602054662e56ab4ff07c254c9f393ae23acdddbdfd46d27"} Jan 30 16:18:11 crc kubenswrapper[4740]: I0130 16:18:11.167397 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d103756-0111-4d00-bff2-438bfdaa8037","Type":"ContainerStarted","Data":"51d3ad2b7f79e1a2546054e8ae30342a2dfdf2bf69006cbf6c965c8f25bc5659"} Jan 30 16:18:11 crc kubenswrapper[4740]: I0130 16:18:11.366793 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="076ac73c-b165-4f36-86d0-9c1765872335" path="/var/lib/kubelet/pods/076ac73c-b165-4f36-86d0-9c1765872335/volumes" Jan 30 16:18:11 crc kubenswrapper[4740]: I0130 16:18:11.367401 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169e80d0-763e-400e-9369-bd048b982484" path="/var/lib/kubelet/pods/169e80d0-763e-400e-9369-bd048b982484/volumes" Jan 30 16:18:12 crc kubenswrapper[4740]: I0130 16:18:12.195055 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148244-f94e-4ae9-9240-8fbcd54aa0ca","Type":"ContainerStarted","Data":"74be9ba40c927ba10b05ac1d3be11eceb611b7bef72193a2bc6d81048ed2b3ac"} Jan 30 16:18:12 crc kubenswrapper[4740]: I0130 16:18:12.229779 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.229754042 podStartE2EDuration="4.229754042s" podCreationTimestamp="2026-01-30 16:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:18:12.228531661 +0000 UTC m=+1340.865594260" watchObservedRunningTime="2026-01-30 16:18:12.229754042 +0000 UTC m=+1340.866816641" Jan 30 16:18:13 crc kubenswrapper[4740]: I0130 16:18:13.218209 4740 generic.go:334] "Generic (PLEG): container finished" podID="a109e6f9-4cfc-4f81-b86e-7813b5f4ff26" containerID="07f2c899e94841a9469b670a950d98f5fdd1ecf52f9bb42ea3bf534141dc91b2" exitCode=0 Jan 30 16:18:13 crc kubenswrapper[4740]: I0130 16:18:13.218313 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8x4q" event={"ID":"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26","Type":"ContainerDied","Data":"07f2c899e94841a9469b670a950d98f5fdd1ecf52f9bb42ea3bf534141dc91b2"} Jan 30 16:18:13 crc kubenswrapper[4740]: I0130 16:18:13.398990 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:18:13 crc kubenswrapper[4740]: I0130 16:18:13.551856 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:18:14 crc kubenswrapper[4740]: I0130 16:18:14.252484 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="69148244-f94e-4ae9-9240-8fbcd54aa0ca" containerName="glance-log" containerID="cri-o://4c418bcb3a2dfdc3dcee43796cc289b82666e57d2572fa9fd7436b12b9335ba8" gracePeriod=30 Jan 30 16:18:14 crc kubenswrapper[4740]: I0130 16:18:14.253318 4740 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="69148244-f94e-4ae9-9240-8fbcd54aa0ca" containerName="glance-httpd" containerID="cri-o://74be9ba40c927ba10b05ac1d3be11eceb611b7bef72193a2bc6d81048ed2b3ac" gracePeriod=30 Jan 30 16:18:14 crc kubenswrapper[4740]: I0130 16:18:14.253436 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d103756-0111-4d00-bff2-438bfdaa8037","Type":"ContainerStarted","Data":"7bc26141dfe85ac8e19df62e8fcac9862644980550417afddbdc543457d146d4"} Jan 30 16:18:14 crc kubenswrapper[4740]: I0130 16:18:14.293590 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.293559839 podStartE2EDuration="6.293559839s" podCreationTimestamp="2026-01-30 16:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:18:14.281042888 +0000 UTC m=+1342.918105487" watchObservedRunningTime="2026-01-30 16:18:14.293559839 +0000 UTC m=+1342.930622438" Jan 30 16:18:15 crc kubenswrapper[4740]: I0130 16:18:15.273868 4740 generic.go:334] "Generic (PLEG): container finished" podID="69148244-f94e-4ae9-9240-8fbcd54aa0ca" containerID="74be9ba40c927ba10b05ac1d3be11eceb611b7bef72193a2bc6d81048ed2b3ac" exitCode=0 Jan 30 16:18:15 crc kubenswrapper[4740]: I0130 16:18:15.274291 4740 generic.go:334] "Generic (PLEG): container finished" podID="69148244-f94e-4ae9-9240-8fbcd54aa0ca" containerID="4c418bcb3a2dfdc3dcee43796cc289b82666e57d2572fa9fd7436b12b9335ba8" exitCode=143 Jan 30 16:18:15 crc kubenswrapper[4740]: I0130 16:18:15.273948 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148244-f94e-4ae9-9240-8fbcd54aa0ca","Type":"ContainerDied","Data":"74be9ba40c927ba10b05ac1d3be11eceb611b7bef72193a2bc6d81048ed2b3ac"} Jan 30 16:18:15 crc kubenswrapper[4740]: I0130 16:18:15.274364 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148244-f94e-4ae9-9240-8fbcd54aa0ca","Type":"ContainerDied","Data":"4c418bcb3a2dfdc3dcee43796cc289b82666e57d2572fa9fd7436b12b9335ba8"} Jan 30 16:18:15 crc kubenswrapper[4740]: I0130 16:18:15.274560 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5d103756-0111-4d00-bff2-438bfdaa8037" containerName="glance-log" containerID="cri-o://51d3ad2b7f79e1a2546054e8ae30342a2dfdf2bf69006cbf6c965c8f25bc5659" gracePeriod=30 Jan 30 16:18:15 crc kubenswrapper[4740]: I0130 16:18:15.275255 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5d103756-0111-4d00-bff2-438bfdaa8037" containerName="glance-httpd" containerID="cri-o://7bc26141dfe85ac8e19df62e8fcac9862644980550417afddbdc543457d146d4" gracePeriod=30 Jan 30 16:18:15 crc kubenswrapper[4740]: I0130 16:18:15.784664 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:18:15 crc kubenswrapper[4740]: I0130 16:18:15.880467 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xnxb4"] Jan 30 16:18:15 crc kubenswrapper[4740]: I0130 16:18:15.880821 4740 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" podUID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" containerName="dnsmasq-dns" containerID="cri-o://5501f2154f56c5db9a022bd9b55e9851c0a9d308a5a598b35bb802d0310faac0" gracePeriod=10 Jan 30 16:18:15 crc kubenswrapper[4740]: I0130 16:18:15.976198 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" podUID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused" Jan 30 16:18:16 crc kubenswrapper[4740]: I0130 16:18:16.316469 4740 generic.go:334] "Generic (PLEG): container finished" podID="5d103756-0111-4d00-bff2-438bfdaa8037" containerID="7bc26141dfe85ac8e19df62e8fcac9862644980550417afddbdc543457d146d4" exitCode=0 Jan 30 16:18:16 crc kubenswrapper[4740]: I0130 16:18:16.316504 4740 generic.go:334] "Generic (PLEG): container finished" podID="5d103756-0111-4d00-bff2-438bfdaa8037" containerID="51d3ad2b7f79e1a2546054e8ae30342a2dfdf2bf69006cbf6c965c8f25bc5659" exitCode=143 Jan 30 16:18:16 crc kubenswrapper[4740]: I0130 16:18:16.316544 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d103756-0111-4d00-bff2-438bfdaa8037","Type":"ContainerDied","Data":"7bc26141dfe85ac8e19df62e8fcac9862644980550417afddbdc543457d146d4"} Jan 30 16:18:16 crc kubenswrapper[4740]: I0130 16:18:16.316591 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d103756-0111-4d00-bff2-438bfdaa8037","Type":"ContainerDied","Data":"51d3ad2b7f79e1a2546054e8ae30342a2dfdf2bf69006cbf6c965c8f25bc5659"} Jan 30 16:18:16 crc kubenswrapper[4740]: I0130 16:18:16.324782 4740 generic.go:334] "Generic (PLEG): container finished" podID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" containerID="5501f2154f56c5db9a022bd9b55e9851c0a9d308a5a598b35bb802d0310faac0" exitCode=0 Jan 30 16:18:16 crc kubenswrapper[4740]: I0130 16:18:16.324862 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" event={"ID":"2fa91b0b-2b2c-4e4e-8ed0-a51652314374","Type":"ContainerDied","Data":"5501f2154f56c5db9a022bd9b55e9851c0a9d308a5a598b35bb802d0310faac0"} Jan 30 16:18:20 crc kubenswrapper[4740]: I0130 16:18:20.928440 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:20 crc kubenswrapper[4740]: I0130 16:18:20.974123 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-fernet-keys\") pod \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " Jan 30 16:18:20 crc kubenswrapper[4740]: I0130 16:18:20.974195 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-config-data\") pod \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " Jan 30 16:18:20 crc kubenswrapper[4740]: I0130 16:18:20.974307 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qmrp\" (UniqueName: \"kubernetes.io/projected/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-kube-api-access-6qmrp\") pod \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " Jan 30 16:18:20 crc kubenswrapper[4740]: I0130 16:18:20.974406 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-scripts\") pod \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " Jan 30 16:18:20 crc kubenswrapper[4740]: I0130 16:18:20.974518 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-combined-ca-bundle\") pod \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " Jan 30 16:18:20 crc kubenswrapper[4740]: I0130 16:18:20.974543 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-credential-keys\") pod \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\" (UID: \"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26\") " Jan 30 16:18:20 crc kubenswrapper[4740]: I0130 16:18:20.982762 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-kube-api-access-6qmrp" (OuterVolumeSpecName: "kube-api-access-6qmrp") pod "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26" (UID: "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26"). InnerVolumeSpecName "kube-api-access-6qmrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:18:20 crc kubenswrapper[4740]: I0130 16:18:20.985547 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26" (UID: "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:20 crc kubenswrapper[4740]: I0130 16:18:20.986600 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-scripts" (OuterVolumeSpecName: "scripts") pod "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26" (UID: "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:20 crc kubenswrapper[4740]: I0130 16:18:20.995625 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26" (UID: "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:21 crc kubenswrapper[4740]: I0130 16:18:21.015882 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-config-data" (OuterVolumeSpecName: "config-data") pod "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26" (UID: "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:21 crc kubenswrapper[4740]: I0130 16:18:21.016894 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26" (UID: "a109e6f9-4cfc-4f81-b86e-7813b5f4ff26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:21 crc kubenswrapper[4740]: I0130 16:18:21.078618 4740 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:21 crc kubenswrapper[4740]: I0130 16:18:21.078663 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:21 crc kubenswrapper[4740]: I0130 16:18:21.078682 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qmrp\" (UniqueName: \"kubernetes.io/projected/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-kube-api-access-6qmrp\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:21 crc kubenswrapper[4740]: I0130 16:18:21.078696 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:21 crc kubenswrapper[4740]: I0130 16:18:21.078710 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:21 crc kubenswrapper[4740]: I0130 16:18:21.078723 4740 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:21 crc kubenswrapper[4740]: I0130 16:18:21.384042 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8x4q" event={"ID":"a109e6f9-4cfc-4f81-b86e-7813b5f4ff26","Type":"ContainerDied","Data":"b4a1e7ca229ab05b2aca935112e42c8e907d5ee11134daef325fd33fc378bbdc"} Jan 30 16:18:21 crc kubenswrapper[4740]: I0130 16:18:21.384098 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a1e7ca229ab05b2aca935112e42c8e907d5ee11134daef325fd33fc378bbdc" Jan 30 16:18:21 crc kubenswrapper[4740]: I0130 16:18:21.384197 4740 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x8x4q" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.062277 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-x8x4q"] Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.080152 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-x8x4q"] Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.125895 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rq44b"] Jan 30 16:18:22 crc kubenswrapper[4740]: E0130 16:18:22.126401 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a109e6f9-4cfc-4f81-b86e-7813b5f4ff26" containerName="keystone-bootstrap" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.126425 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a109e6f9-4cfc-4f81-b86e-7813b5f4ff26" containerName="keystone-bootstrap" Jan 30 16:18:22 crc kubenswrapper[4740]: E0130 16:18:22.126456 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169e80d0-763e-400e-9369-bd048b982484" containerName="init" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.126463 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="169e80d0-763e-400e-9369-bd048b982484" containerName="init" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.126677 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a109e6f9-4cfc-4f81-b86e-7813b5f4ff26" containerName="keystone-bootstrap" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.126703 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="169e80d0-763e-400e-9369-bd048b982484" containerName="init" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.127463 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.132332 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.132600 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-n88sh" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.132802 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.133195 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.141959 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rq44b"] Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.227441 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7m75\" (UniqueName: \"kubernetes.io/projected/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-kube-api-access-m7m75\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.227981 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-fernet-keys\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.228019 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-scripts\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.228087 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-combined-ca-bundle\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.228108 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-credential-keys\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.228160 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-config-data\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.330266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7m75\" (UniqueName: \"kubernetes.io/projected/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-kube-api-access-m7m75\") pod 
\"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.330411 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-fernet-keys\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.330453 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-scripts\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.330525 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-combined-ca-bundle\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.330549 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-credential-keys\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.330585 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-config-data\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.336059 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-credential-keys\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.336340 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-fernet-keys\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.336792 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-combined-ca-bundle\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.337336 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-config-data\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.338117 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-scripts\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.349740 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7m75\" (UniqueName: \"kubernetes.io/projected/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-kube-api-access-m7m75\") pod \"keystone-bootstrap-rq44b\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:22 crc kubenswrapper[4740]: I0130 16:18:22.452908 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:18:23 crc kubenswrapper[4740]: I0130 16:18:23.357557 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a109e6f9-4cfc-4f81-b86e-7813b5f4ff26" path="/var/lib/kubelet/pods/a109e6f9-4cfc-4f81-b86e-7813b5f4ff26/volumes" Jan 30 16:18:24 crc kubenswrapper[4740]: I0130 16:18:24.454983 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:18:24 crc kubenswrapper[4740]: I0130 16:18:24.455503 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:18:25 crc kubenswrapper[4740]: I0130 16:18:25.976194 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" podUID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.642650 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.764985 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-nb\") pod \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.765060 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-config\") pod \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.765122 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-479mc\" (UniqueName: \"kubernetes.io/projected/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-kube-api-access-479mc\") pod \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.765204 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-dns-svc\") pod \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.765374 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-sb\") pod \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\" (UID: \"2fa91b0b-2b2c-4e4e-8ed0-a51652314374\") " Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.773562 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-kube-api-access-479mc" (OuterVolumeSpecName: "kube-api-access-479mc") pod "2fa91b0b-2b2c-4e4e-8ed0-a51652314374" (UID: "2fa91b0b-2b2c-4e4e-8ed0-a51652314374"). InnerVolumeSpecName "kube-api-access-479mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.827807 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2fa91b0b-2b2c-4e4e-8ed0-a51652314374" (UID: "2fa91b0b-2b2c-4e4e-8ed0-a51652314374"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.829527 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2fa91b0b-2b2c-4e4e-8ed0-a51652314374" (UID: "2fa91b0b-2b2c-4e4e-8ed0-a51652314374"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.840739 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-config" (OuterVolumeSpecName: "config") pod "2fa91b0b-2b2c-4e4e-8ed0-a51652314374" (UID: "2fa91b0b-2b2c-4e4e-8ed0-a51652314374"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.842572 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2fa91b0b-2b2c-4e4e-8ed0-a51652314374" (UID: "2fa91b0b-2b2c-4e4e-8ed0-a51652314374"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.868982 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.869018 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.869033 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.869050 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-479mc\" (UniqueName: \"kubernetes.io/projected/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-kube-api-access-479mc\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:27 crc kubenswrapper[4740]: I0130 16:18:27.869064 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa91b0b-2b2c-4e4e-8ed0-a51652314374-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:28 crc kubenswrapper[4740]: I0130 16:18:28.465619 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" event={"ID":"2fa91b0b-2b2c-4e4e-8ed0-a51652314374","Type":"ContainerDied","Data":"46db931dc6c44aa8f35a6434e68f8aa8b09e88a52f900e61eec90164afc2eaba"} Jan 30 16:18:28 crc kubenswrapper[4740]: I0130 16:18:28.466134 4740 scope.go:117] "RemoveContainer" containerID="5501f2154f56c5db9a022bd9b55e9851c0a9d308a5a598b35bb802d0310faac0" Jan 30 16:18:28 crc kubenswrapper[4740]: I0130 16:18:28.465785 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" Jan 30 16:18:28 crc kubenswrapper[4740]: I0130 16:18:28.518059 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xnxb4"] Jan 30 16:18:28 crc kubenswrapper[4740]: I0130 16:18:28.528961 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xnxb4"] Jan 30 16:18:29 crc kubenswrapper[4740]: I0130 16:18:29.381985 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" path="/var/lib/kubelet/pods/2fa91b0b-2b2c-4e4e-8ed0-a51652314374/volumes" Jan 30 16:18:30 crc kubenswrapper[4740]: I0130 16:18:30.978371 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-xnxb4" podUID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.098009 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.146221 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-logs\") pod \"5d103756-0111-4d00-bff2-438bfdaa8037\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.146296 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2ffb\" (UniqueName: \"kubernetes.io/projected/5d103756-0111-4d00-bff2-438bfdaa8037-kube-api-access-k2ffb\") pod \"5d103756-0111-4d00-bff2-438bfdaa8037\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.146562 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"5d103756-0111-4d00-bff2-438bfdaa8037\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.146677 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-httpd-run\") pod \"5d103756-0111-4d00-bff2-438bfdaa8037\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.146827 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-combined-ca-bundle\") pod \"5d103756-0111-4d00-bff2-438bfdaa8037\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.146858 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-scripts\") pod \"5d103756-0111-4d00-bff2-438bfdaa8037\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.146909 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-config-data\") pod \"5d103756-0111-4d00-bff2-438bfdaa8037\" (UID: \"5d103756-0111-4d00-bff2-438bfdaa8037\") " Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.147179 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-logs" (OuterVolumeSpecName: "logs") pod "5d103756-0111-4d00-bff2-438bfdaa8037" (UID: "5d103756-0111-4d00-bff2-438bfdaa8037"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.147403 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5d103756-0111-4d00-bff2-438bfdaa8037" (UID: "5d103756-0111-4d00-bff2-438bfdaa8037"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.148295 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.148322 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d103756-0111-4d00-bff2-438bfdaa8037-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.154514 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d103756-0111-4d00-bff2-438bfdaa8037-kube-api-access-k2ffb" (OuterVolumeSpecName: "kube-api-access-k2ffb") pod "5d103756-0111-4d00-bff2-438bfdaa8037" (UID: "5d103756-0111-4d00-bff2-438bfdaa8037"). InnerVolumeSpecName "kube-api-access-k2ffb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.166188 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-scripts" (OuterVolumeSpecName: "scripts") pod "5d103756-0111-4d00-bff2-438bfdaa8037" (UID: "5d103756-0111-4d00-bff2-438bfdaa8037"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.174869 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3" (OuterVolumeSpecName: "glance") pod "5d103756-0111-4d00-bff2-438bfdaa8037" (UID: "5d103756-0111-4d00-bff2-438bfdaa8037"). InnerVolumeSpecName "pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.194146 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d103756-0111-4d00-bff2-438bfdaa8037" (UID: "5d103756-0111-4d00-bff2-438bfdaa8037"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.230766 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-config-data" (OuterVolumeSpecName: "config-data") pod "5d103756-0111-4d00-bff2-438bfdaa8037" (UID: "5d103756-0111-4d00-bff2-438bfdaa8037"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.252227 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2ffb\" (UniqueName: \"kubernetes.io/projected/5d103756-0111-4d00-bff2-438bfdaa8037-kube-api-access-k2ffb\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.252314 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") on node \"crc\" " Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.252330 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.252368 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.252379 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d103756-0111-4d00-bff2-438bfdaa8037-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.278602 4740 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.278801 4740 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3") on node "crc" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.354382 4740 reconciler_common.go:293] "Volume detached for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.548442 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d103756-0111-4d00-bff2-438bfdaa8037","Type":"ContainerDied","Data":"c7109d9af77df124136e090cc7b1411f0705c35d05359548868fb4cb045950f7"} Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.548540 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.583342 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.601431 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.615415 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:18:35 crc kubenswrapper[4740]: E0130 16:18:35.615978 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d103756-0111-4d00-bff2-438bfdaa8037" containerName="glance-httpd" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.616005 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d103756-0111-4d00-bff2-438bfdaa8037" containerName="glance-httpd" Jan 30 16:18:35 crc kubenswrapper[4740]: E0130 16:18:35.616018 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" containerName="init" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.616029 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" containerName="init" Jan 30 16:18:35 crc kubenswrapper[4740]: E0130 16:18:35.616046 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" containerName="dnsmasq-dns" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.616056 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" containerName="dnsmasq-dns" Jan 30 16:18:35 crc kubenswrapper[4740]: E0130 16:18:35.616069 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d103756-0111-4d00-bff2-438bfdaa8037" containerName="glance-log" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.616076 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d103756-0111-4d00-bff2-438bfdaa8037" containerName="glance-log" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.616384 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d103756-0111-4d00-bff2-438bfdaa8037" containerName="glance-log" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.616405 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d103756-0111-4d00-bff2-438bfdaa8037" containerName="glance-httpd" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.616533 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa91b0b-2b2c-4e4e-8ed0-a51652314374" containerName="dnsmasq-dns" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.621687 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.629017 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.629277 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.632497 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.663798 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.663876 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br5k6\" (UniqueName: \"kubernetes.io/projected/961accad-8205-4289-9227-4ab2538ebdb1-kube-api-access-br5k6\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.664448 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.664530 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-logs\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.664592 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.664627 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.664676 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.664705 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.767398 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.768136 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-logs\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.768268 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.768327 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.768403 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.768450 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.768487 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.768559 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br5k6\" (UniqueName: \"kubernetes.io/projected/961accad-8205-4289-9227-4ab2538ebdb1-kube-api-access-br5k6\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 
16:18:35.768808 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-logs\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.769159 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.771585 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.771624 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/60fa5ce19fcc327994c70afc2a90d04a285bdf3051c4029a47293957e337f4a5/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.775631 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.776739 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.777771 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.780636 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.795862 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br5k6\" (UniqueName: \"kubernetes.io/projected/961accad-8205-4289-9227-4ab2538ebdb1-kube-api-access-br5k6\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: E0130 16:18:35.814051 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc 
= copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 30 16:18:35 crc kubenswrapper[4740]: E0130 16:18:35.814418 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb7lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2k9q9_openstack(cc1c912a-97a6-4de7-ad45-ced02c0f40e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:18:35 crc kubenswrapper[4740]: E0130 16:18:35.815992 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2k9q9" podUID="cc1c912a-97a6-4de7-ad45-ced02c0f40e5" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.830471 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:18:35 crc kubenswrapper[4740]: I0130 16:18:35.950228 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.285836 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.384136 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-scripts\") pod \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.384245 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-config-data\") pod \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.384439 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-467qp\" (UniqueName: \"kubernetes.io/projected/69148244-f94e-4ae9-9240-8fbcd54aa0ca-kube-api-access-467qp\") pod \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.384517 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-combined-ca-bundle\") pod \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.384579 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-httpd-run\") pod \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.384634 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-logs\") pod \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.384831 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\" (UID: \"69148244-f94e-4ae9-9240-8fbcd54aa0ca\") " Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.385612 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "69148244-f94e-4ae9-9240-8fbcd54aa0ca" (UID: "69148244-f94e-4ae9-9240-8fbcd54aa0ca"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.386549 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.386913 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-logs" (OuterVolumeSpecName: "logs") pod "69148244-f94e-4ae9-9240-8fbcd54aa0ca" (UID: "69148244-f94e-4ae9-9240-8fbcd54aa0ca"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.389828 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69148244-f94e-4ae9-9240-8fbcd54aa0ca-kube-api-access-467qp" (OuterVolumeSpecName: "kube-api-access-467qp") pod "69148244-f94e-4ae9-9240-8fbcd54aa0ca" (UID: "69148244-f94e-4ae9-9240-8fbcd54aa0ca"). InnerVolumeSpecName "kube-api-access-467qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.400011 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-scripts" (OuterVolumeSpecName: "scripts") pod "69148244-f94e-4ae9-9240-8fbcd54aa0ca" (UID: "69148244-f94e-4ae9-9240-8fbcd54aa0ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.412257 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4" (OuterVolumeSpecName: "glance") pod "69148244-f94e-4ae9-9240-8fbcd54aa0ca" (UID: "69148244-f94e-4ae9-9240-8fbcd54aa0ca"). InnerVolumeSpecName "pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.422897 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69148244-f94e-4ae9-9240-8fbcd54aa0ca" (UID: "69148244-f94e-4ae9-9240-8fbcd54aa0ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.454632 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-config-data" (OuterVolumeSpecName: "config-data") pod "69148244-f94e-4ae9-9240-8fbcd54aa0ca" (UID: "69148244-f94e-4ae9-9240-8fbcd54aa0ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.488926 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-467qp\" (UniqueName: \"kubernetes.io/projected/69148244-f94e-4ae9-9240-8fbcd54aa0ca-kube-api-access-467qp\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.488964 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.488976 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69148244-f94e-4ae9-9240-8fbcd54aa0ca-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.489015 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") on node \"crc\" " Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.489027 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.489037 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69148244-f94e-4ae9-9240-8fbcd54aa0ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.515303 4740 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.515561 4740 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4") on node "crc" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.568885 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"69148244-f94e-4ae9-9240-8fbcd54aa0ca","Type":"ContainerDied","Data":"59c201289237516ae602054662e56ab4ff07c254c9f393ae23acdddbdfd46d27"} Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.568956 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: E0130 16:18:36.571387 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-2k9q9" podUID="cc1c912a-97a6-4de7-ad45-ced02c0f40e5" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.591716 4740 reconciler_common.go:293] "Volume detached for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") on node \"crc\" DevicePath \"\"" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.650676 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.668365 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.735822 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:18:36 crc kubenswrapper[4740]: E0130 16:18:36.736454 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69148244-f94e-4ae9-9240-8fbcd54aa0ca" containerName="glance-log" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.736473 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="69148244-f94e-4ae9-9240-8fbcd54aa0ca" containerName="glance-log" Jan 30 16:18:36 crc kubenswrapper[4740]: E0130 16:18:36.736507 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69148244-f94e-4ae9-9240-8fbcd54aa0ca" containerName="glance-httpd" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.736514 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="69148244-f94e-4ae9-9240-8fbcd54aa0ca" containerName="glance-httpd" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.736765 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="69148244-f94e-4ae9-9240-8fbcd54aa0ca" containerName="glance-httpd" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.736789 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="69148244-f94e-4ae9-9240-8fbcd54aa0ca" containerName="glance-log" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.738294 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.743213 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.743581 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.754795 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.802793 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.802880 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.802955 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pgsz\" (UniqueName: \"kubernetes.io/projected/cbe1d10c-c40d-4b5a-bc95-10495060deb7-kube-api-access-6pgsz\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.803017 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.803757 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-logs\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.803863 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.803905 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.804001 4740 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.907325 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.907454 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-logs\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.908279 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-logs\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.908375 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.908431 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.908553 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.910486 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.910844 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.911312 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.913981 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pgsz\" (UniqueName: \"kubernetes.io/projected/cbe1d10c-c40d-4b5a-bc95-10495060deb7-kube-api-access-6pgsz\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.915249 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.916051 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.916407 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.916449 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d7b0e4f5e4f3aeb77e4c3eeab8492d1ed4d740072e82a0a742970a29e35f2749/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.916563 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.922074 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.935331 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pgsz\" (UniqueName: \"kubernetes.io/projected/cbe1d10c-c40d-4b5a-bc95-10495060deb7-kube-api-access-6pgsz\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:36 crc kubenswrapper[4740]: I0130 16:18:36.992596 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " pod="openstack/glance-default-external-api-0" Jan 30 16:18:37 crc kubenswrapper[4740]: I0130 16:18:37.071849 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:18:37 crc kubenswrapper[4740]: I0130 16:18:37.363335 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d103756-0111-4d00-bff2-438bfdaa8037" path="/var/lib/kubelet/pods/5d103756-0111-4d00-bff2-438bfdaa8037/volumes" Jan 30 16:18:37 crc kubenswrapper[4740]: I0130 16:18:37.364536 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69148244-f94e-4ae9-9240-8fbcd54aa0ca" path="/var/lib/kubelet/pods/69148244-f94e-4ae9-9240-8fbcd54aa0ca/volumes" Jan 30 16:18:37 crc kubenswrapper[4740]: E0130 16:18:37.743514 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 16:18:37 crc kubenswrapper[4740]: E0130 16:18:37.743734 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vx24p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSo
urce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-pkfjm_openstack(2754b498-304b-47aa-a2d3-71a9c2f70e8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:18:37 crc kubenswrapper[4740]: E0130 16:18:37.744955 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-pkfjm" podUID="2754b498-304b-47aa-a2d3-71a9c2f70e8e" Jan 30 16:18:38 crc kubenswrapper[4740]: E0130 16:18:38.593381 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-pkfjm" podUID="2754b498-304b-47aa-a2d3-71a9c2f70e8e" Jan 30 16:18:40 crc kubenswrapper[4740]: I0130 16:18:40.309811 4740 scope.go:117] "RemoveContainer" containerID="55e2d4922aacd1b5580a88b81c4e25511c66343df8589fcc559174aab1a8e482" Jan 30 16:18:40 crc kubenswrapper[4740]: I0130 16:18:40.946269 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rq44b"] Jan 30 16:18:50 crc kubenswrapper[4740]: I0130 16:18:50.338567 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:18:51 crc kubenswrapper[4740]: W0130 16:18:51.943960 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab8f73dc_23d4_4221_b0a4_a76f2373e7b7.slice/crio-ac69a79a63c8d8528f86142eacc27b53d2b747a327ec6179fd23bebcb22fbf13 WatchSource:0}: Error finding container ac69a79a63c8d8528f86142eacc27b53d2b747a327ec6179fd23bebcb22fbf13: Status 404 returned error can't find the container with id ac69a79a63c8d8528f86142eacc27b53d2b747a327ec6179fd23bebcb22fbf13 Jan 30 16:18:51 crc kubenswrapper[4740]: I0130 16:18:51.994739 4740 scope.go:117] "RemoveContainer" containerID="7bc26141dfe85ac8e19df62e8fcac9862644980550417afddbdc543457d146d4" Jan 30 16:18:52 crc kubenswrapper[4740]: I0130 16:18:52.229408 4740 scope.go:117] "RemoveContainer" containerID="51d3ad2b7f79e1a2546054e8ae30342a2dfdf2bf69006cbf6c965c8f25bc5659" Jan 30 16:18:52 crc kubenswrapper[4740]: I0130 16:18:52.259468 4740 scope.go:117] "RemoveContainer" containerID="74be9ba40c927ba10b05ac1d3be11eceb611b7bef72193a2bc6d81048ed2b3ac" Jan 30 16:18:52 crc kubenswrapper[4740]: I0130 16:18:52.296972 4740 scope.go:117] "RemoveContainer" containerID="4c418bcb3a2dfdc3dcee43796cc289b82666e57d2572fa9fd7436b12b9335ba8" Jan 30 16:18:52 crc kubenswrapper[4740]: E0130 16:18:52.587500 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Jan 30 16:18:52 crc kubenswrapper[4740]: E0130 16:18:52.587575 4740 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Jan 30 16:18:52 crc kubenswrapper[4740]: E0130 16:18:52.587743 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
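The cinder-db-sync entries above show the kubelet's standard degradation for a failed pull: the sync that attempted the pull fails with ErrImagePull, and the following syncs are throttled with ImagePullBackOff until the back-off window expires. A minimal sketch of that doubling back-off, assuming the stock kubelet parameters (10s initial delay, 300s cap) rather than anything read from this cluster:

package main

import (
	"fmt"
	"time"
)

// Sketch of the image-pull back-off behind the ImagePullBackOff entries
// above. The 10s initial delay and 300s cap are assumed kubelet defaults,
// not values taken from this log.
func main() {
	delay := 10 * time.Second
	const maxDelay = 300 * time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}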
&Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwpt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-lfz95_openstack(29755348-1e90-4436-8a60-a2823c2804fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:18:52 crc kubenswrapper[4740]: E0130 16:18:52.589131 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-lfz95" podUID="29755348-1e90-4436-8a60-a2823c2804fd" Jan 30 16:18:52 crc kubenswrapper[4740]: I0130 16:18:52.634224 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:18:52 crc kubenswrapper[4740]: W0130 16:18:52.639453 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbe1d10c_c40d_4b5a_bc95_10495060deb7.slice/crio-33151f480a33123807d56072ce6fdf8cc510da9894760a99d39a477935ab1ff3 WatchSource:0}: Error finding container 33151f480a33123807d56072ce6fdf8cc510da9894760a99d39a477935ab1ff3: Status 404 returned error can't find the container with id 
33151f480a33123807d56072ce6fdf8cc510da9894760a99d39a477935ab1ff3 Jan 30 16:18:52 crc kubenswrapper[4740]: I0130 16:18:52.775467 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rq44b" event={"ID":"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7","Type":"ContainerStarted","Data":"98024c68ffaac917ede6a544df17eafb9a8539d6c1de2916e3234db2e97cd801"} Jan 30 16:18:52 crc kubenswrapper[4740]: I0130 16:18:52.775545 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rq44b" event={"ID":"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7","Type":"ContainerStarted","Data":"ac69a79a63c8d8528f86142eacc27b53d2b747a327ec6179fd23bebcb22fbf13"} Jan 30 16:18:52 crc kubenswrapper[4740]: I0130 16:18:52.785944 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ltwk6" event={"ID":"e68ec665-a90a-4332-8e78-79f658776815","Type":"ContainerStarted","Data":"71e26fddcac7ac2d674d9e4181c99c52d0326d5ccdd828adf918194a7bf2f30d"} Jan 30 16:18:52 crc kubenswrapper[4740]: I0130 16:18:52.794017 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbe1d10c-c40d-4b5a-bc95-10495060deb7","Type":"ContainerStarted","Data":"33151f480a33123807d56072ce6fdf8cc510da9894760a99d39a477935ab1ff3"} Jan 30 16:18:52 crc kubenswrapper[4740]: I0130 16:18:52.799935 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rq44b" podStartSLOduration=30.799912905 podStartE2EDuration="30.799912905s" podCreationTimestamp="2026-01-30 16:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:18:52.799223758 +0000 UTC m=+1381.436286357" watchObservedRunningTime="2026-01-30 16:18:52.799912905 +0000 UTC m=+1381.436975504" Jan 30 16:18:52 crc kubenswrapper[4740]: I0130 16:18:52.800864 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95fa343f-47ce-425f-a254-58264f0a3f6b","Type":"ContainerStarted","Data":"18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340"} Jan 30 16:18:52 crc kubenswrapper[4740]: E0130 16:18:52.803375 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-lfz95" podUID="29755348-1e90-4436-8a60-a2823c2804fd" Jan 30 16:18:52 crc kubenswrapper[4740]: I0130 16:18:52.827680 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ltwk6" podStartSLOduration=17.592753292 podStartE2EDuration="48.827652123s" podCreationTimestamp="2026-01-30 16:18:04 +0000 UTC" firstStartedPulling="2026-01-30 16:18:06.409878156 +0000 UTC m=+1335.046940755" lastFinishedPulling="2026-01-30 16:18:37.644776987 +0000 UTC m=+1366.281839586" observedRunningTime="2026-01-30 16:18:52.817770718 +0000 UTC m=+1381.454833317" watchObservedRunningTime="2026-01-30 16:18:52.827652123 +0000 UTC m=+1381.464714722" Jan 30 16:18:53 crc kubenswrapper[4740]: I0130 16:18:53.629233 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:18:53 crc kubenswrapper[4740]: I0130 16:18:53.822305 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
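The placement-db-sync latency entry above makes the tracker's bookkeeping visible: podStartSLOduration (17.592753292) equals podStartE2EDuration (48.827652123s) minus the image-pull window from firstStartedPulling to lastFinishedPulling (31.234898831s). A small check of that arithmetic with the timestamps copied from the log; the subtraction is an observed relationship in these entries, not a documented kubelet API:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout without fractional seconds; time.Parse accepts a fractional
	// second in the input even when the layout omits it.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	first, _ := time.Parse(layout, "2026-01-30 16:18:06.409878156 +0000 UTC")
	last, _ := time.Parse(layout, "2026-01-30 16:18:37.644776987 +0000 UTC")

	e2e := 48.827652123 // podStartE2EDuration from the log, in seconds
	pull := last.Sub(first).Seconds()
	fmt.Printf("pull window: %.9fs\n", pull)     // 31.234898831
	fmt.Printf("e2e - pull:  %.9fs\n", e2e-pull) // 17.592753292 = podStartSLOduration
}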
event={"ID":"cbe1d10c-c40d-4b5a-bc95-10495060deb7","Type":"ContainerStarted","Data":"d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999"} Jan 30 16:18:54 crc kubenswrapper[4740]: I0130 16:18:54.454922 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:18:54 crc kubenswrapper[4740]: I0130 16:18:54.454981 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:18:55 crc kubenswrapper[4740]: I0130 16:18:55.848396 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"961accad-8205-4289-9227-4ab2538ebdb1","Type":"ContainerStarted","Data":"0496d16eefeb402932dd3365c16a286b0a4060ed68454e8bc53afc91408b80f4"} Jan 30 16:19:24 crc kubenswrapper[4740]: I0130 16:19:24.454514 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:19:24 crc kubenswrapper[4740]: I0130 16:19:24.457187 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:19:24 crc kubenswrapper[4740]: I0130 16:19:24.457422 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 16:19:24 crc kubenswrapper[4740]: I0130 16:19:24.458807 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a0922e4de366e57138167824b08934e73cd7659f84fae5490627ddb260dd599"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:19:24 crc kubenswrapper[4740]: I0130 16:19:24.459045 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://8a0922e4de366e57138167824b08934e73cd7659f84fae5490627ddb260dd599" gracePeriod=600 Jan 30 16:19:25 crc kubenswrapper[4740]: I0130 16:19:25.200800 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="8a0922e4de366e57138167824b08934e73cd7659f84fae5490627ddb260dd599" exitCode=0 Jan 30 16:19:25 crc kubenswrapper[4740]: I0130 16:19:25.200851 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" 
event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"8a0922e4de366e57138167824b08934e73cd7659f84fae5490627ddb260dd599"} Jan 30 16:19:25 crc kubenswrapper[4740]: I0130 16:19:25.200916 4740 scope.go:117] "RemoveContainer" containerID="7444545e175a90767b9873079c8fd1472b5f709bb77111922611dbabedd78e11" Jan 30 16:19:26 crc kubenswrapper[4740]: E0130 16:19:26.218932 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1268691984/1\": happened during read: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified" Jan 30 16:19:26 crc kubenswrapper[4740]: E0130 16:19:26.219160 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n695h588hc5h5dh5f9h688h8h5f4h657h5c9h5c5h5c8h5ddh668hbdhc9h55h556h79h5ch595h58dh56h696h594h695h58fh5b7hd9h56dhb7hc4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mkq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(95fa343f-47ce-425f-a254-58264f0a3f6b): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1268691984/1\": happened during read: context canceled" logger="UnhandledError" Jan 30 16:19:27 crc kubenswrapper[4740]: I0130 16:19:27.254924 4740 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-lfz95" event={"ID":"29755348-1e90-4436-8a60-a2823c2804fd","Type":"ContainerStarted","Data":"ed761ba196c2364e32d286b0ca3a603491fac5dfda56bf0c5bc389d529ae1342"} Jan 30 16:19:27 crc kubenswrapper[4740]: I0130 16:19:27.285477 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-lfz95" podStartSLOduration=3.871224566 podStartE2EDuration="1m24.285453247s" podCreationTimestamp="2026-01-30 16:18:03 +0000 UTC" firstStartedPulling="2026-01-30 16:18:06.074560941 +0000 UTC m=+1334.711623540" lastFinishedPulling="2026-01-30 16:19:26.488789622 +0000 UTC m=+1415.125852221" observedRunningTime="2026-01-30 16:19:27.283850658 +0000 UTC m=+1415.920913257" watchObservedRunningTime="2026-01-30 16:19:27.285453247 +0000 UTC m=+1415.922515846" Jan 30 16:19:27 crc kubenswrapper[4740]: I0130 16:19:27.304222 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"961accad-8205-4289-9227-4ab2538ebdb1","Type":"ContainerStarted","Data":"03fbd565c19c239e68a13cc8441944efbe25f635e25dd9da6511f4c2d193a2f8"} Jan 30 16:19:27 crc kubenswrapper[4740]: I0130 16:19:27.321553 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97"} Jan 30 16:19:27 crc kubenswrapper[4740]: I0130 16:19:27.325505 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2k9q9" event={"ID":"cc1c912a-97a6-4de7-ad45-ced02c0f40e5","Type":"ContainerStarted","Data":"c1775948756214f31d782697a6482275f7f5e00820681667ea3e55feeea2dacd"} Jan 30 16:19:27 crc kubenswrapper[4740]: I0130 16:19:27.332526 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbe1d10c-c40d-4b5a-bc95-10495060deb7","Type":"ContainerStarted","Data":"a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c"} Jan 30 16:19:27 crc kubenswrapper[4740]: I0130 16:19:27.337463 4740 generic.go:334] "Generic (PLEG): container finished" podID="ab8f73dc-23d4-4221-b0a4-a76f2373e7b7" containerID="98024c68ffaac917ede6a544df17eafb9a8539d6c1de2916e3234db2e97cd801" exitCode=0 Jan 30 16:19:27 crc kubenswrapper[4740]: I0130 16:19:27.358505 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rq44b" event={"ID":"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7","Type":"ContainerDied","Data":"98024c68ffaac917ede6a544df17eafb9a8539d6c1de2916e3234db2e97cd801"} Jan 30 16:19:27 crc kubenswrapper[4740]: I0130 16:19:27.434469 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2k9q9" podStartSLOduration=3.263330585 podStartE2EDuration="1m23.434435232s" podCreationTimestamp="2026-01-30 16:18:04 +0000 UTC" firstStartedPulling="2026-01-30 16:18:06.24569213 +0000 UTC m=+1334.882754729" lastFinishedPulling="2026-01-30 16:19:26.416796787 +0000 UTC m=+1415.053859376" observedRunningTime="2026-01-30 16:19:27.365136143 +0000 UTC m=+1416.002198732" watchObservedRunningTime="2026-01-30 16:19:27.434435232 +0000 UTC m=+1416.071497831" Jan 30 16:19:27 crc kubenswrapper[4740]: I0130 16:19:27.500415 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=51.500378047 
podStartE2EDuration="51.500378047s" podCreationTimestamp="2026-01-30 16:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:27.393773793 +0000 UTC m=+1416.030836412" watchObservedRunningTime="2026-01-30 16:19:27.500378047 +0000 UTC m=+1416.137440656" Jan 30 16:19:28 crc kubenswrapper[4740]: I0130 16:19:28.363310 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"961accad-8205-4289-9227-4ab2538ebdb1","Type":"ContainerStarted","Data":"0863b88005332061e30181135cd3f294739f8eb27b01ba7183a625bbd06f214e"} Jan 30 16:19:28 crc kubenswrapper[4740]: I0130 16:19:28.378670 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pkfjm" event={"ID":"2754b498-304b-47aa-a2d3-71a9c2f70e8e","Type":"ContainerStarted","Data":"1344ab6c54073eb0e098787b221c48c7db2b0e6b9a160e797a2cb2826f5bd461"} Jan 30 16:19:28 crc kubenswrapper[4740]: I0130 16:19:28.387279 4740 generic.go:334] "Generic (PLEG): container finished" podID="e68ec665-a90a-4332-8e78-79f658776815" containerID="71e26fddcac7ac2d674d9e4181c99c52d0326d5ccdd828adf918194a7bf2f30d" exitCode=0 Jan 30 16:19:28 crc kubenswrapper[4740]: I0130 16:19:28.387818 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ltwk6" event={"ID":"e68ec665-a90a-4332-8e78-79f658776815","Type":"ContainerDied","Data":"71e26fddcac7ac2d674d9e4181c99c52d0326d5ccdd828adf918194a7bf2f30d"} Jan 30 16:19:28 crc kubenswrapper[4740]: I0130 16:19:28.434185 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=53.434163513 podStartE2EDuration="53.434163513s" podCreationTimestamp="2026-01-30 16:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:28.400936989 +0000 UTC m=+1417.037999588" watchObservedRunningTime="2026-01-30 16:19:28.434163513 +0000 UTC m=+1417.071226112" Jan 30 16:19:28 crc kubenswrapper[4740]: I0130 16:19:28.542557 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-pkfjm" podStartSLOduration=4.303901044 podStartE2EDuration="1m24.54252709s" podCreationTimestamp="2026-01-30 16:18:04 +0000 UTC" firstStartedPulling="2026-01-30 16:18:06.0749142 +0000 UTC m=+1334.711976799" lastFinishedPulling="2026-01-30 16:19:26.313540246 +0000 UTC m=+1414.950602845" observedRunningTime="2026-01-30 16:19:28.442045128 +0000 UTC m=+1417.079107727" watchObservedRunningTime="2026-01-30 16:19:28.54252709 +0000 UTC m=+1417.179589689" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.105506 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.199159 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-fernet-keys\") pod \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.199285 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-config-data\") pod \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.199338 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-combined-ca-bundle\") pod \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.199433 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-scripts\") pod \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.199571 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7m75\" (UniqueName: \"kubernetes.io/projected/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-kube-api-access-m7m75\") pod \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.199664 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-credential-keys\") pod \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\" (UID: \"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7\") " Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.208889 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-kube-api-access-m7m75" (OuterVolumeSpecName: "kube-api-access-m7m75") pod "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7" (UID: "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7"). InnerVolumeSpecName "kube-api-access-m7m75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.209015 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7" (UID: "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.209324 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7" (UID: "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.220774 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-scripts" (OuterVolumeSpecName: "scripts") pod "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7" (UID: "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.258699 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7" (UID: "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.273470 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-config-data" (OuterVolumeSpecName: "config-data") pod "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7" (UID: "ab8f73dc-23d4-4221-b0a4-a76f2373e7b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.305765 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.306054 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7m75\" (UniqueName: \"kubernetes.io/projected/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-kube-api-access-m7m75\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.306144 4740 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.306220 4740 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.306332 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.306474 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.406027 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rq44b" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.406982 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rq44b" event={"ID":"ab8f73dc-23d4-4221-b0a4-a76f2373e7b7","Type":"ContainerDied","Data":"ac69a79a63c8d8528f86142eacc27b53d2b747a327ec6179fd23bebcb22fbf13"} Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.407011 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac69a79a63c8d8528f86142eacc27b53d2b747a327ec6179fd23bebcb22fbf13" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.655275 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-59f5786cfd-w4tqb"] Jan 30 16:19:29 crc kubenswrapper[4740]: E0130 16:19:29.656751 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8f73dc-23d4-4221-b0a4-a76f2373e7b7" containerName="keystone-bootstrap" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.656782 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8f73dc-23d4-4221-b0a4-a76f2373e7b7" containerName="keystone-bootstrap" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.657080 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8f73dc-23d4-4221-b0a4-a76f2373e7b7" containerName="keystone-bootstrap" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.660087 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.670269 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.670997 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.671133 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-n88sh" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.671290 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.680886 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-59f5786cfd-w4tqb"] Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.671446 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.671536 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.738505 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-internal-tls-certs\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.738570 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz57k\" (UniqueName: \"kubernetes.io/projected/7f54d2dc-eb88-4049-8f40-4605058f7feb-kube-api-access-nz57k\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc 
kubenswrapper[4740]: I0130 16:19:29.738948 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-public-tls-certs\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.739071 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-combined-ca-bundle\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.739159 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-scripts\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.739322 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-credential-keys\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.739471 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-config-data\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.739504 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-fernet-keys\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.848602 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-public-tls-certs\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.848715 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-combined-ca-bundle\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.848773 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-scripts\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 
16:19:29.848886 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-credential-keys\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.848981 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-config-data\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.849011 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-fernet-keys\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.849118 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-internal-tls-certs\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.849150 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz57k\" (UniqueName: \"kubernetes.io/projected/7f54d2dc-eb88-4049-8f40-4605058f7feb-kube-api-access-nz57k\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.859797 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-scripts\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.860819 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-public-tls-certs\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.863467 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-config-data\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.863705 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-combined-ca-bundle\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.863898 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-internal-tls-certs\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.864220 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-credential-keys\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.872147 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz57k\" (UniqueName: \"kubernetes.io/projected/7f54d2dc-eb88-4049-8f40-4605058f7feb-kube-api-access-nz57k\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.872635 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f54d2dc-eb88-4049-8f40-4605058f7feb-fernet-keys\") pod \"keystone-59f5786cfd-w4tqb\" (UID: \"7f54d2dc-eb88-4049-8f40-4605058f7feb\") " pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:29 crc kubenswrapper[4740]: I0130 16:19:29.992630 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:32 crc kubenswrapper[4740]: I0130 16:19:32.453501 4740 generic.go:334] "Generic (PLEG): container finished" podID="cd120629-d064-4ce0-a5d2-73656425765f" containerID="746789ef9658790b905a92511842cb3dd5f1417a1a034f491db8cf9b203b0a98" exitCode=0 Jan 30 16:19:32 crc kubenswrapper[4740]: I0130 16:19:32.453600 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vhkg4" event={"ID":"cd120629-d064-4ce0-a5d2-73656425765f","Type":"ContainerDied","Data":"746789ef9658790b905a92511842cb3dd5f1417a1a034f491db8cf9b203b0a98"} Jan 30 16:19:33 crc kubenswrapper[4740]: I0130 16:19:33.470650 4740 generic.go:334] "Generic (PLEG): container finished" podID="cc1c912a-97a6-4de7-ad45-ced02c0f40e5" containerID="c1775948756214f31d782697a6482275f7f5e00820681667ea3e55feeea2dacd" exitCode=0 Jan 30 16:19:33 crc kubenswrapper[4740]: I0130 16:19:33.470740 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2k9q9" event={"ID":"cc1c912a-97a6-4de7-ad45-ced02c0f40e5","Type":"ContainerDied","Data":"c1775948756214f31d782697a6482275f7f5e00820681667ea3e55feeea2dacd"} Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.486944 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vhkg4" event={"ID":"cd120629-d064-4ce0-a5d2-73656425765f","Type":"ContainerDied","Data":"bc2d0c5becdf094211ca70544fb415732deded48f948d404967beaece2be777e"} Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.487262 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc2d0c5becdf094211ca70544fb415732deded48f948d404967beaece2be777e" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.488372 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ltwk6" event={"ID":"e68ec665-a90a-4332-8e78-79f658776815","Type":"ContainerDied","Data":"39fd0f4a8564ec517802c22737e8e4421c8cf2ea83cac203f5890f8ae73e35e0"} Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.488433 4740 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39fd0f4a8564ec517802c22737e8e4421c8cf2ea83cac203f5890f8ae73e35e0" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.698106 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ltwk6" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.715822 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.839742 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-combined-ca-bundle\") pod \"cd120629-d064-4ce0-a5d2-73656425765f\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.839842 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-894tt\" (UniqueName: \"kubernetes.io/projected/e68ec665-a90a-4332-8e78-79f658776815-kube-api-access-894tt\") pod \"e68ec665-a90a-4332-8e78-79f658776815\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.839929 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv8j4\" (UniqueName: \"kubernetes.io/projected/cd120629-d064-4ce0-a5d2-73656425765f-kube-api-access-kv8j4\") pod \"cd120629-d064-4ce0-a5d2-73656425765f\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.839988 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e68ec665-a90a-4332-8e78-79f658776815-logs\") pod \"e68ec665-a90a-4332-8e78-79f658776815\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.840156 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-combined-ca-bundle\") pod \"e68ec665-a90a-4332-8e78-79f658776815\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.840326 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-scripts\") pod \"e68ec665-a90a-4332-8e78-79f658776815\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.840451 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-config\") pod \"cd120629-d064-4ce0-a5d2-73656425765f\" (UID: \"cd120629-d064-4ce0-a5d2-73656425765f\") " Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.840530 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-config-data\") pod \"e68ec665-a90a-4332-8e78-79f658776815\" (UID: \"e68ec665-a90a-4332-8e78-79f658776815\") " Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.843101 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e68ec665-a90a-4332-8e78-79f658776815-logs" (OuterVolumeSpecName: 
"logs") pod "e68ec665-a90a-4332-8e78-79f658776815" (UID: "e68ec665-a90a-4332-8e78-79f658776815"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.869882 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68ec665-a90a-4332-8e78-79f658776815-kube-api-access-894tt" (OuterVolumeSpecName: "kube-api-access-894tt") pod "e68ec665-a90a-4332-8e78-79f658776815" (UID: "e68ec665-a90a-4332-8e78-79f658776815"). InnerVolumeSpecName "kube-api-access-894tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.871547 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-scripts" (OuterVolumeSpecName: "scripts") pod "e68ec665-a90a-4332-8e78-79f658776815" (UID: "e68ec665-a90a-4332-8e78-79f658776815"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.886617 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd120629-d064-4ce0-a5d2-73656425765f-kube-api-access-kv8j4" (OuterVolumeSpecName: "kube-api-access-kv8j4") pod "cd120629-d064-4ce0-a5d2-73656425765f" (UID: "cd120629-d064-4ce0-a5d2-73656425765f"). InnerVolumeSpecName "kube-api-access-kv8j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.919509 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e68ec665-a90a-4332-8e78-79f658776815" (UID: "e68ec665-a90a-4332-8e78-79f658776815"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.919697 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd120629-d064-4ce0-a5d2-73656425765f" (UID: "cd120629-d064-4ce0-a5d2-73656425765f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.949535 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv8j4\" (UniqueName: \"kubernetes.io/projected/cd120629-d064-4ce0-a5d2-73656425765f-kube-api-access-kv8j4\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.949589 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e68ec665-a90a-4332-8e78-79f658776815-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.949606 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.953122 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.953145 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.953156 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-894tt\" (UniqueName: \"kubernetes.io/projected/e68ec665-a90a-4332-8e78-79f658776815-kube-api-access-894tt\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.955888 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-config-data" (OuterVolumeSpecName: "config-data") pod "e68ec665-a90a-4332-8e78-79f658776815" (UID: "e68ec665-a90a-4332-8e78-79f658776815"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.984575 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:19:34 crc kubenswrapper[4740]: I0130 16:19:34.995060 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-config" (OuterVolumeSpecName: "config") pod "cd120629-d064-4ce0-a5d2-73656425765f" (UID: "cd120629-d064-4ce0-a5d2-73656425765f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.055127 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd120629-d064-4ce0-a5d2-73656425765f-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.055188 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68ec665-a90a-4332-8e78-79f658776815-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.156588 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-combined-ca-bundle\") pod \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.157230 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb7lq\" (UniqueName: \"kubernetes.io/projected/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-kube-api-access-kb7lq\") pod \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.157528 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-db-sync-config-data\") pod \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\" (UID: \"cc1c912a-97a6-4de7-ad45-ced02c0f40e5\") " Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.163855 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cc1c912a-97a6-4de7-ad45-ced02c0f40e5" (UID: "cc1c912a-97a6-4de7-ad45-ced02c0f40e5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.164192 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-kube-api-access-kb7lq" (OuterVolumeSpecName: "kube-api-access-kb7lq") pod "cc1c912a-97a6-4de7-ad45-ced02c0f40e5" (UID: "cc1c912a-97a6-4de7-ad45-ced02c0f40e5"). InnerVolumeSpecName "kube-api-access-kb7lq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.166574 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-59f5786cfd-w4tqb"] Jan 30 16:19:35 crc kubenswrapper[4740]: W0130 16:19:35.170533 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f54d2dc_eb88_4049_8f40_4605058f7feb.slice/crio-d1fd8e84596f5bb9a5c9619e22e0f7bd923067627e715a88738884396906424b WatchSource:0}: Error finding container d1fd8e84596f5bb9a5c9619e22e0f7bd923067627e715a88738884396906424b: Status 404 returned error can't find the container with id d1fd8e84596f5bb9a5c9619e22e0f7bd923067627e715a88738884396906424b Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.192319 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc1c912a-97a6-4de7-ad45-ced02c0f40e5" (UID: "cc1c912a-97a6-4de7-ad45-ced02c0f40e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.260258 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.260307 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb7lq\" (UniqueName: \"kubernetes.io/projected/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-kube-api-access-kb7lq\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.260326 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc1c912a-97a6-4de7-ad45-ced02c0f40e5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.506073 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95fa343f-47ce-425f-a254-58264f0a3f6b","Type":"ContainerStarted","Data":"5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4"} Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.508453 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-59f5786cfd-w4tqb" event={"ID":"7f54d2dc-eb88-4049-8f40-4605058f7feb","Type":"ContainerStarted","Data":"5501b03dfbe8391e19a5af95b184e0c4a2ab29ed0b2b93bbf485edc4397745a8"} Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.508535 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-59f5786cfd-w4tqb" event={"ID":"7f54d2dc-eb88-4049-8f40-4605058f7feb","Type":"ContainerStarted","Data":"d1fd8e84596f5bb9a5c9619e22e0f7bd923067627e715a88738884396906424b"} Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.508594 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.513755 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2k9q9" event={"ID":"cc1c912a-97a6-4de7-ad45-ced02c0f40e5","Type":"ContainerDied","Data":"6fabf4cb37495f27c7a01306b6e2ca694cc4a37f89a3e0ad0665f16e2a89c249"} Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.514296 4740 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="6fabf4cb37495f27c7a01306b6e2ca694cc4a37f89a3e0ad0665f16e2a89c249" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.513817 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2k9q9" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.513799 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ltwk6" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.513850 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vhkg4" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.533528 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-59f5786cfd-w4tqb" podStartSLOduration=6.533503588 podStartE2EDuration="6.533503588s" podCreationTimestamp="2026-01-30 16:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:35.527716714 +0000 UTC m=+1424.164779313" watchObservedRunningTime="2026-01-30 16:19:35.533503588 +0000 UTC m=+1424.170566187" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.742085 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7cc6d874d7-q46r7"] Jan 30 16:19:35 crc kubenswrapper[4740]: E0130 16:19:35.742607 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68ec665-a90a-4332-8e78-79f658776815" containerName="placement-db-sync" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.742627 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68ec665-a90a-4332-8e78-79f658776815" containerName="placement-db-sync" Jan 30 16:19:35 crc kubenswrapper[4740]: E0130 16:19:35.742639 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1c912a-97a6-4de7-ad45-ced02c0f40e5" containerName="barbican-db-sync" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.742648 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1c912a-97a6-4de7-ad45-ced02c0f40e5" containerName="barbican-db-sync" Jan 30 16:19:35 crc kubenswrapper[4740]: E0130 16:19:35.742657 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd120629-d064-4ce0-a5d2-73656425765f" containerName="neutron-db-sync" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.742665 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd120629-d064-4ce0-a5d2-73656425765f" containerName="neutron-db-sync" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.742886 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1c912a-97a6-4de7-ad45-ced02c0f40e5" containerName="barbican-db-sync" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.742912 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd120629-d064-4ce0-a5d2-73656425765f" containerName="neutron-db-sync" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.742929 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68ec665-a90a-4332-8e78-79f658776815" containerName="placement-db-sync" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.748793 4740 util.go:30] "No sandbox for pod can be found. 
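
The pod_startup_latency_tracker entry above records podStartSLOduration=6.533503588 for keystone-59f5786cfd-w4tqb. With firstStartedPulling and lastFinishedPulling at the zero timestamp (no image pull counted), that figure is consistent with watchObservedRunningTime minus podCreationTimestamp. A minimal check of that arithmetic, with the two timestamps copied from the entry itself (the subtraction is the only logic here):

    from datetime import datetime, timezone

    # Timestamps copied from the pod_startup_latency_tracker entry above,
    # truncated to microseconds (Python datetime resolution).
    created  = datetime(2026, 1, 30, 16, 19, 29, 0,      tzinfo=timezone.utc)  # podCreationTimestamp
    observed = datetime(2026, 1, 30, 16, 19, 35, 533503, tzinfo=timezone.utc)  # watchObservedRunningTime

    print((observed - created).total_seconds())  # 6.533503 ~= podStartSLOduration
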
Need to start a new one" pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.765240 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cr9bj" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.765923 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.766120 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.793415 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7cc6d874d7-q46r7"] Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.885470 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06bc0d0f-04a5-4703-97a4-6d44ccc42006-logs\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.885537 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06bc0d0f-04a5-4703-97a4-6d44ccc42006-combined-ca-bundle\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.885607 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06bc0d0f-04a5-4703-97a4-6d44ccc42006-config-data-custom\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.885672 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06bc0d0f-04a5-4703-97a4-6d44ccc42006-config-data\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.885771 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hckx9\" (UniqueName: \"kubernetes.io/projected/06bc0d0f-04a5-4703-97a4-6d44ccc42006-kube-api-access-hckx9\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.951973 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.952042 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.952057 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.952068 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.988398 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06bc0d0f-04a5-4703-97a4-6d44ccc42006-logs\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.988477 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06bc0d0f-04a5-4703-97a4-6d44ccc42006-combined-ca-bundle\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.988521 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06bc0d0f-04a5-4703-97a4-6d44ccc42006-config-data-custom\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.988565 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06bc0d0f-04a5-4703-97a4-6d44ccc42006-config-data\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.988663 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hckx9\" (UniqueName: \"kubernetes.io/projected/06bc0d0f-04a5-4703-97a4-6d44ccc42006-kube-api-access-hckx9\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.988432 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-fcbp6"] Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.992460 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:35 crc kubenswrapper[4740]: I0130 16:19:35.999231 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06bc0d0f-04a5-4703-97a4-6d44ccc42006-logs\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.009914 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06bc0d0f-04a5-4703-97a4-6d44ccc42006-config-data-custom\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.012003 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06bc0d0f-04a5-4703-97a4-6d44ccc42006-config-data\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.031097 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06bc0d0f-04a5-4703-97a4-6d44ccc42006-combined-ca-bundle\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.061367 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hckx9\" (UniqueName: \"kubernetes.io/projected/06bc0d0f-04a5-4703-97a4-6d44ccc42006-kube-api-access-hckx9\") pod \"barbican-worker-7cc6d874d7-q46r7\" (UID: \"06bc0d0f-04a5-4703-97a4-6d44ccc42006\") " pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.086223 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7cc6d874d7-q46r7" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.090881 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.090956 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-config\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.090986 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.091081 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xf8f\" (UniqueName: \"kubernetes.io/projected/c9a47262-edfc-4f90-931b-8bba58325a3c-kube-api-access-5xf8f\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.091124 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.091161 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.186132 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-649cd9f6b8-lgj8x"] Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.195194 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.204809 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.206161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xf8f\" (UniqueName: \"kubernetes.io/projected/c9a47262-edfc-4f90-931b-8bba58325a3c-kube-api-access-5xf8f\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.206195 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.206250 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.206331 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.206386 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-config\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.206412 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.208574 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.210441 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-config\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.210924 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.211064 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.245178 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.258953 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.303509 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xf8f\" (UniqueName: \"kubernetes.io/projected/c9a47262-edfc-4f90-931b-8bba58325a3c-kube-api-access-5xf8f\") pod \"dnsmasq-dns-7c67bffd47-fcbp6\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.309898 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92b93f04-34e0-47a3-af34-cd7e7717c444-logs\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.323632 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92b93f04-34e0-47a3-af34-cd7e7717c444-config-data\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.324222 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b93f04-34e0-47a3-af34-cd7e7717c444-combined-ca-bundle\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.324688 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b92wf\" (UniqueName: \"kubernetes.io/projected/92b93f04-34e0-47a3-af34-cd7e7717c444-kube-api-access-b92wf\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.324840 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/92b93f04-34e0-47a3-af34-cd7e7717c444-config-data-custom\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.314163 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-649cd9f6b8-lgj8x"] Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.331619 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-fcbp6"] Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.452365 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b92wf\" (UniqueName: \"kubernetes.io/projected/92b93f04-34e0-47a3-af34-cd7e7717c444-kube-api-access-b92wf\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.452430 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92b93f04-34e0-47a3-af34-cd7e7717c444-config-data-custom\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.452645 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92b93f04-34e0-47a3-af34-cd7e7717c444-logs\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.452684 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92b93f04-34e0-47a3-af34-cd7e7717c444-config-data\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.452931 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b93f04-34e0-47a3-af34-cd7e7717c444-combined-ca-bundle\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.467055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92b93f04-34e0-47a3-af34-cd7e7717c444-logs\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.484023 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/92b93f04-34e0-47a3-af34-cd7e7717c444-config-data-custom\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.487167 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92b93f04-34e0-47a3-af34-cd7e7717c444-combined-ca-bundle\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.505610 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.508961 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92b93f04-34e0-47a3-af34-cd7e7717c444-config-data\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.649259 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b92wf\" (UniqueName: \"kubernetes.io/projected/92b93f04-34e0-47a3-af34-cd7e7717c444-kube-api-access-b92wf\") pod \"barbican-keystone-listener-649cd9f6b8-lgj8x\" (UID: \"92b93f04-34e0-47a3-af34-cd7e7717c444\") " pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.674873 4740 generic.go:334] "Generic (PLEG): container finished" podID="29755348-1e90-4436-8a60-a2823c2804fd" containerID="ed761ba196c2364e32d286b0ca3a603491fac5dfda56bf0c5bc389d529ae1342" exitCode=0 Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.677719 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-lfz95" event={"ID":"29755348-1e90-4436-8a60-a2823c2804fd","Type":"ContainerDied","Data":"ed761ba196c2364e32d286b0ca3a603491fac5dfda56bf0c5bc389d529ae1342"} Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.712673 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7745b764-mmpkw"] Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.718604 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.742464 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.742841 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cs2pd" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.742979 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.743409 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.743426 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.768643 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7745b764-mmpkw"] Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.868438 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-bf9587c4-75g67"] Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.870622 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.883327 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.898232 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bf9587c4-75g67"] Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.936935 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdzlh\" (UniqueName: \"kubernetes.io/projected/f4b64e71-6b99-4f78-9636-4996a1e4ecee-kube-api-access-vdzlh\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.937311 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrwpz\" (UniqueName: \"kubernetes.io/projected/09b3c286-aa27-4b55-8b05-50484d643da5-kube-api-access-hrwpz\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.939387 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b3c286-aa27-4b55-8b05-50484d643da5-logs\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.940234 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b64e71-6b99-4f78-9636-4996a1e4ecee-logs\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.940367 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-combined-ca-bundle\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.940505 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-public-tls-certs\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.940629 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-internal-tls-certs\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.940766 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-combined-ca-bundle\") pod \"barbican-api-bf9587c4-75g67\" (UID: 
\"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.939288 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.942288 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data-custom\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.942665 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.942751 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-scripts\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.942881 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-config-data\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:36 crc kubenswrapper[4740]: I0130 16:19:36.973620 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-fcbp6"] Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.000653 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64bd6c9fd8-9p6nz"] Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.003899 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.032005 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.032269 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.032498 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rm2xc" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.032656 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.063912 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrwpz\" (UniqueName: \"kubernetes.io/projected/09b3c286-aa27-4b55-8b05-50484d643da5-kube-api-access-hrwpz\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.069539 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-config\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.069579 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-ovndb-tls-certs\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.069730 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b3c286-aa27-4b55-8b05-50484d643da5-logs\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.069777 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b64e71-6b99-4f78-9636-4996a1e4ecee-logs\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.069829 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-combined-ca-bundle\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.069894 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvsth\" (UniqueName: \"kubernetes.io/projected/2701590d-93ff-476c-8ad7-fd118b873a3e-kube-api-access-zvsth\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.069961 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-public-tls-certs\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.070048 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-internal-tls-certs\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.070136 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-combined-ca-bundle\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.070157 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-httpd-config\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.070215 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data-custom\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.070278 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.070282 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b3c286-aa27-4b55-8b05-50484d643da5-logs\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.070302 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-scripts\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.070556 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-config-data\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.070603 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-combined-ca-bundle\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.070742 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdzlh\" (UniqueName: \"kubernetes.io/projected/f4b64e71-6b99-4f78-9636-4996a1e4ecee-kube-api-access-vdzlh\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.072186 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.095294 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-config-data\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.096326 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.098802 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-scripts\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.099630 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64bd6c9fd8-9p6nz"] Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.099678 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.099693 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.099703 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.103413 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-combined-ca-bundle\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.107638 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b64e71-6b99-4f78-9636-4996a1e4ecee-logs\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.123337 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-public-tls-certs\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 
16:19:37.134540 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrwpz\" (UniqueName: \"kubernetes.io/projected/09b3c286-aa27-4b55-8b05-50484d643da5-kube-api-access-hrwpz\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.134729 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.135022 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data-custom\") pod \"barbican-api-bf9587c4-75g67\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.136434 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdzlh\" (UniqueName: \"kubernetes.io/projected/f4b64e71-6b99-4f78-9636-4996a1e4ecee-kube-api-access-vdzlh\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.138994 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-internal-tls-certs\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.152399 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b64e71-6b99-4f78-9636-4996a1e4ecee-combined-ca-bundle\") pod \"placement-7745b764-mmpkw\" (UID: \"f4b64e71-6b99-4f78-9636-4996a1e4ecee\") " pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.177787 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wlggn"] Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.180680 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.193941 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-combined-ca-bundle\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.194293 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-config\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.194326 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-ovndb-tls-certs\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.194465 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvsth\" (UniqueName: \"kubernetes.io/projected/2701590d-93ff-476c-8ad7-fd118b873a3e-kube-api-access-zvsth\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.194703 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-httpd-config\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.200761 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wlggn"] Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.222281 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-ovndb-tls-certs\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.231565 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-httpd-config\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.232694 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.233945 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-config\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.254121 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-combined-ca-bundle\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.265827 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.274394 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvsth\" (UniqueName: \"kubernetes.io/projected/2701590d-93ff-476c-8ad7-fd118b873a3e-kube-api-access-zvsth\") pod \"neutron-64bd6c9fd8-9p6nz\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.274863 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.301070 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-config\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.301173 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.301214 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.301269 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.301326 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn6nn\" (UniqueName: \"kubernetes.io/projected/221ff40a-c66f-4ddc-87b9-e7d10732b89e-kube-api-access-zn6nn\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.301435 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.392539 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.401419 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7cc6d874d7-q46r7"] Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.403039 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.403146 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn6nn\" (UniqueName: \"kubernetes.io/projected/221ff40a-c66f-4ddc-87b9-e7d10732b89e-kube-api-access-zn6nn\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.403225 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.403277 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-config\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.403322 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.403367 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.404908 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.414604 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.419154 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.421438 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.415764 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-config\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.442949 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.458103 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn6nn\" (UniqueName: \"kubernetes.io/projected/221ff40a-c66f-4ddc-87b9-e7d10732b89e-kube-api-access-zn6nn\") pod \"dnsmasq-dns-848cf88cfc-wlggn\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") " pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.530166 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.575745 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-fcbp6"] Jan 30 16:19:37 crc kubenswrapper[4740]: W0130 16:19:37.607660 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a47262_edfc_4f90_931b_8bba58325a3c.slice/crio-6ebdc59081de899c4b4f26ff9729e0fac02b01c7ee0d0239979690befcc1bb43 WatchSource:0}: Error finding container 6ebdc59081de899c4b4f26ff9729e0fac02b01c7ee0d0239979690befcc1bb43: Status 404 returned error can't find the container with id 6ebdc59081de899c4b4f26ff9729e0fac02b01c7ee0d0239979690befcc1bb43 Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.732842 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" event={"ID":"c9a47262-edfc-4f90-931b-8bba58325a3c","Type":"ContainerStarted","Data":"6ebdc59081de899c4b4f26ff9729e0fac02b01c7ee0d0239979690befcc1bb43"} Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.734336 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7cc6d874d7-q46r7" event={"ID":"06bc0d0f-04a5-4703-97a4-6d44ccc42006","Type":"ContainerStarted","Data":"3dad22fdafa646d58a76503ba501d9d2fddc850a64f002d58089d35f2acd5c99"} Jan 30 16:19:37 crc kubenswrapper[4740]: I0130 16:19:37.910755 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-649cd9f6b8-lgj8x"] Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.011927 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bf9587c4-75g67"] Jan 30 16:19:38 crc kubenswrapper[4740]: W0130 16:19:38.064831 4740 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09b3c286_aa27_4b55_8b05_50484d643da5.slice/crio-8e482167eecd98609a08b7a62027a3802dd60646b8b8b8a119d08eb6c8ad8ed7 WatchSource:0}: Error finding container 8e482167eecd98609a08b7a62027a3802dd60646b8b8b8a119d08eb6c8ad8ed7: Status 404 returned error can't find the container with id 8e482167eecd98609a08b7a62027a3802dd60646b8b8b8a119d08eb6c8ad8ed7 Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.519400 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7745b764-mmpkw"] Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.732396 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wlggn"] Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.757206 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-lfz95" event={"ID":"29755348-1e90-4436-8a60-a2823c2804fd","Type":"ContainerDied","Data":"000b796aaa5b9161d256e3bdc502850a08f9fb81d7619ffaa0b2fe88748233d5"} Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.757257 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="000b796aaa5b9161d256e3bdc502850a08f9fb81d7619ffaa0b2fe88748233d5" Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.759770 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerStarted","Data":"8e482167eecd98609a08b7a62027a3802dd60646b8b8b8a119d08eb6c8ad8ed7"} Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.762642 4740 generic.go:334] "Generic (PLEG): container finished" podID="2754b498-304b-47aa-a2d3-71a9c2f70e8e" containerID="1344ab6c54073eb0e098787b221c48c7db2b0e6b9a160e797a2cb2826f5bd461" exitCode=0 Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.762698 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pkfjm" event={"ID":"2754b498-304b-47aa-a2d3-71a9c2f70e8e","Type":"ContainerDied","Data":"1344ab6c54073eb0e098787b221c48c7db2b0e6b9a160e797a2cb2826f5bd461"} Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.767955 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" event={"ID":"92b93f04-34e0-47a3-af34-cd7e7717c444","Type":"ContainerStarted","Data":"139f61c2b3f2c702304ce74f1f46b09a5df9ae130f66952a269d652ca2da0ad0"} Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.789126 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7745b764-mmpkw" event={"ID":"f4b64e71-6b99-4f78-9636-4996a1e4ecee","Type":"ContainerStarted","Data":"c5d3b161e02b1b63ee2714baa132ef93bffe11c1110a668e1ff7cc9ff3eb8a6c"} Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.822833 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.861779 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-combined-ca-bundle\") pod \"29755348-1e90-4436-8a60-a2823c2804fd\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.861838 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-certs\") pod \"29755348-1e90-4436-8a60-a2823c2804fd\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.861881 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-config-data\") pod \"29755348-1e90-4436-8a60-a2823c2804fd\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.861961 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-scripts\") pod \"29755348-1e90-4436-8a60-a2823c2804fd\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.862131 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwpt7\" (UniqueName: \"kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-kube-api-access-vwpt7\") pod \"29755348-1e90-4436-8a60-a2823c2804fd\" (UID: \"29755348-1e90-4436-8a60-a2823c2804fd\") " Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.870031 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-scripts" (OuterVolumeSpecName: "scripts") pod "29755348-1e90-4436-8a60-a2823c2804fd" (UID: "29755348-1e90-4436-8a60-a2823c2804fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.876153 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-certs" (OuterVolumeSpecName: "certs") pod "29755348-1e90-4436-8a60-a2823c2804fd" (UID: "29755348-1e90-4436-8a60-a2823c2804fd"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.914044 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64bd6c9fd8-9p6nz"] Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.927643 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29755348-1e90-4436-8a60-a2823c2804fd" (UID: "29755348-1e90-4436-8a60-a2823c2804fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.927897 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-kube-api-access-vwpt7" (OuterVolumeSpecName: "kube-api-access-vwpt7") pod "29755348-1e90-4436-8a60-a2823c2804fd" (UID: "29755348-1e90-4436-8a60-a2823c2804fd"). InnerVolumeSpecName "kube-api-access-vwpt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.962679 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-config-data" (OuterVolumeSpecName: "config-data") pod "29755348-1e90-4436-8a60-a2823c2804fd" (UID: "29755348-1e90-4436-8a60-a2823c2804fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.966205 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.966254 4740 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.966265 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.966276 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29755348-1e90-4436-8a60-a2823c2804fd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:38 crc kubenswrapper[4740]: I0130 16:19:38.966285 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwpt7\" (UniqueName: \"kubernetes.io/projected/29755348-1e90-4436-8a60-a2823c2804fd-kube-api-access-vwpt7\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:39 crc kubenswrapper[4740]: W0130 16:19:39.135168 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2701590d_93ff_476c_8ad7_fd118b873a3e.slice/crio-e0808ec587ab176304d03b38be5f4b051a412123383533c9828f1abfab5acdc3 WatchSource:0}: Error finding container e0808ec587ab176304d03b38be5f4b051a412123383533c9828f1abfab5acdc3: Status 404 returned error can't find the container with id e0808ec587ab176304d03b38be5f4b051a412123383533c9828f1abfab5acdc3 Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.753695 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-586b4b4677-4tdp8"] Jan 30 16:19:39 crc kubenswrapper[4740]: E0130 16:19:39.754828 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29755348-1e90-4436-8a60-a2823c2804fd" containerName="cloudkitty-db-sync" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.754845 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="29755348-1e90-4436-8a60-a2823c2804fd" containerName="cloudkitty-db-sync" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.755157 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="29755348-1e90-4436-8a60-a2823c2804fd" 
containerName="cloudkitty-db-sync" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.756637 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.763532 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.763815 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.778194 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-586b4b4677-4tdp8"] Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.790008 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-internal-tls-certs\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.790067 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-combined-ca-bundle\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.790103 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-ovndb-tls-certs\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.790156 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-httpd-config\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.790230 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-public-tls-certs\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.790261 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn6pt\" (UniqueName: \"kubernetes.io/projected/4876d8e9-6662-4958-bb1a-091307ccfd02-kube-api-access-hn6pt\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.790334 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-config\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: 
I0130 16:19:39.818271 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" event={"ID":"c9a47262-edfc-4f90-931b-8bba58325a3c","Type":"ContainerStarted","Data":"58e446ea1b337c9b5fe3d788e497aa983aa04abdbb9199a38cbeaa4dddfe6f7b"} Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.818482 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" podUID="c9a47262-edfc-4f90-931b-8bba58325a3c" containerName="init" containerID="cri-o://58e446ea1b337c9b5fe3d788e497aa983aa04abdbb9199a38cbeaa4dddfe6f7b" gracePeriod=10 Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.837527 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerStarted","Data":"70b7474a4233a59043fb6e39a447187a286e78aab0acf7c3649d4ff8171f8b61"} Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.843828 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" event={"ID":"221ff40a-c66f-4ddc-87b9-e7d10732b89e","Type":"ContainerStarted","Data":"8fe44a93cf50f5871c25cf782a39a8260e9d9f29395ea64ad6d47886207cff70"} Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.857659 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bd6c9fd8-9p6nz" event={"ID":"2701590d-93ff-476c-8ad7-fd118b873a3e","Type":"ContainerStarted","Data":"e0808ec587ab176304d03b38be5f4b051a412123383533c9828f1abfab5acdc3"} Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.858036 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-lfz95" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.905161 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-internal-tls-certs\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.905252 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-combined-ca-bundle\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.905287 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-ovndb-tls-certs\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.905339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-httpd-config\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.905529 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-public-tls-certs\") pod 
\"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.905567 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn6pt\" (UniqueName: \"kubernetes.io/projected/4876d8e9-6662-4958-bb1a-091307ccfd02-kube-api-access-hn6pt\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.905641 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-config\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.927187 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-ovndb-tls-certs\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.927829 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-internal-tls-certs\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.932267 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn6pt\" (UniqueName: \"kubernetes.io/projected/4876d8e9-6662-4958-bb1a-091307ccfd02-kube-api-access-hn6pt\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.933925 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-config\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.942864 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-httpd-config\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.956212 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-public-tls-certs\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:39 crc kubenswrapper[4740]: I0130 16:19:39.977466 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4876d8e9-6662-4958-bb1a-091307ccfd02-combined-ca-bundle\") pod \"neutron-586b4b4677-4tdp8\" (UID: \"4876d8e9-6662-4958-bb1a-091307ccfd02\") " pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.135954 4740 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-kfpzr"] Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.138320 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.141981 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-54bf9" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.142254 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.142492 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.142749 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.142941 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.215771 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-certs\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.215861 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-scripts\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.215883 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-combined-ca-bundle\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.215946 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-config-data\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.215984 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pc58\" (UniqueName: \"kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-kube-api-access-7pc58\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.220466 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-kfpzr"] Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.317824 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-config-data\") pod 
\"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.317919 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pc58\" (UniqueName: \"kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-kube-api-access-7pc58\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.318060 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-certs\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.318122 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-scripts\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.318154 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-combined-ca-bundle\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.386301 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-certs\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.388005 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-config-data\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.391706 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-scripts\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.395170 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-combined-ca-bundle\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.405132 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pc58\" (UniqueName: \"kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-kube-api-access-7pc58\") pod \"cloudkitty-storageinit-kfpzr\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 
16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.812065 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.817694 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.834310 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2754b498-304b-47aa-a2d3-71a9c2f70e8e-etc-machine-id\") pod \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.835805 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.836622 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-scripts\") pod \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.836623 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2754b498-304b-47aa-a2d3-71a9c2f70e8e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2754b498-304b-47aa-a2d3-71a9c2f70e8e" (UID: "2754b498-304b-47aa-a2d3-71a9c2f70e8e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.836653 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-config-data\") pod \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.836750 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx24p\" (UniqueName: \"kubernetes.io/projected/2754b498-304b-47aa-a2d3-71a9c2f70e8e-kube-api-access-vx24p\") pod \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.836892 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-db-sync-config-data\") pod \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.836945 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-combined-ca-bundle\") pod \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\" (UID: \"2754b498-304b-47aa-a2d3-71a9c2f70e8e\") " Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.839991 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2754b498-304b-47aa-a2d3-71a9c2f70e8e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.854674 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-scripts" (OuterVolumeSpecName: "scripts") pod "2754b498-304b-47aa-a2d3-71a9c2f70e8e" (UID: "2754b498-304b-47aa-a2d3-71a9c2f70e8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.855586 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2754b498-304b-47aa-a2d3-71a9c2f70e8e-kube-api-access-vx24p" (OuterVolumeSpecName: "kube-api-access-vx24p") pod "2754b498-304b-47aa-a2d3-71a9c2f70e8e" (UID: "2754b498-304b-47aa-a2d3-71a9c2f70e8e"). InnerVolumeSpecName "kube-api-access-vx24p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.859631 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2754b498-304b-47aa-a2d3-71a9c2f70e8e" (UID: "2754b498-304b-47aa-a2d3-71a9c2f70e8e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.942623 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx24p\" (UniqueName: \"kubernetes.io/projected/2754b498-304b-47aa-a2d3-71a9c2f70e8e-kube-api-access-vx24p\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.942684 4740 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.942701 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.954964 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7745b764-mmpkw" event={"ID":"f4b64e71-6b99-4f78-9636-4996a1e4ecee","Type":"ContainerStarted","Data":"7a42186db3e739aed80661a115a9ef4e85b68408350c5461a7547abaa2691449"} Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.967386 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bd6c9fd8-9p6nz" event={"ID":"2701590d-93ff-476c-8ad7-fd118b873a3e","Type":"ContainerStarted","Data":"7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45"} Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.985841 4740 generic.go:334] "Generic (PLEG): container finished" podID="c9a47262-edfc-4f90-931b-8bba58325a3c" containerID="58e446ea1b337c9b5fe3d788e497aa983aa04abdbb9199a38cbeaa4dddfe6f7b" exitCode=0 Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.985943 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" event={"ID":"c9a47262-edfc-4f90-931b-8bba58325a3c","Type":"ContainerDied","Data":"58e446ea1b337c9b5fe3d788e497aa983aa04abdbb9199a38cbeaa4dddfe6f7b"} Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.997915 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pkfjm" event={"ID":"2754b498-304b-47aa-a2d3-71a9c2f70e8e","Type":"ContainerDied","Data":"4003629e0a0e9cad413ce410852bfb84947413140ecbb55ea4655ea2ca304ef3"} Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.997989 
4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4003629e0a0e9cad413ce410852bfb84947413140ecbb55ea4655ea2ca304ef3" Jan 30 16:19:40 crc kubenswrapper[4740]: I0130 16:19:40.998067 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pkfjm" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.035185 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jz9qh"] Jan 30 16:19:41 crc kubenswrapper[4740]: E0130 16:19:41.041118 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2754b498-304b-47aa-a2d3-71a9c2f70e8e" containerName="cinder-db-sync" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.041162 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2754b498-304b-47aa-a2d3-71a9c2f70e8e" containerName="cinder-db-sync" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.041399 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2754b498-304b-47aa-a2d3-71a9c2f70e8e" containerName="cinder-db-sync" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.048674 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.068246 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jz9qh"] Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.167508 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2754b498-304b-47aa-a2d3-71a9c2f70e8e" (UID: "2754b498-304b-47aa-a2d3-71a9c2f70e8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.234613 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-config-data" (OuterVolumeSpecName: "config-data") pod "2754b498-304b-47aa-a2d3-71a9c2f70e8e" (UID: "2754b498-304b-47aa-a2d3-71a9c2f70e8e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.256759 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-utilities\") pod \"redhat-operators-jz9qh\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.256866 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-catalog-content\") pod \"redhat-operators-jz9qh\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.257109 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2gg4\" (UniqueName: \"kubernetes.io/projected/b939d225-58bf-4604-953d-8ed193ae6f0b-kube-api-access-r2gg4\") pod \"redhat-operators-jz9qh\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.257700 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.257723 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2754b498-304b-47aa-a2d3-71a9c2f70e8e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.367138 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-utilities\") pod \"redhat-operators-jz9qh\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.367215 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-catalog-content\") pod \"redhat-operators-jz9qh\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.367277 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2gg4\" (UniqueName: \"kubernetes.io/projected/b939d225-58bf-4604-953d-8ed193ae6f0b-kube-api-access-r2gg4\") pod \"redhat-operators-jz9qh\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.368332 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-catalog-content\") pod \"redhat-operators-jz9qh\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.368630 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-utilities\") pod \"redhat-operators-jz9qh\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.394485 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2gg4\" (UniqueName: \"kubernetes.io/projected/b939d225-58bf-4604-953d-8ed193ae6f0b-kube-api-access-r2gg4\") pod \"redhat-operators-jz9qh\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.497742 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:19:41 crc kubenswrapper[4740]: W0130 16:19:41.778498 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4876d8e9_6662_4958_bb1a_091307ccfd02.slice/crio-3389dc08a65f6852ec33ed16e5acf686d46e65950672357aef2dbad4ce300c7c WatchSource:0}: Error finding container 3389dc08a65f6852ec33ed16e5acf686d46e65950672357aef2dbad4ce300c7c: Status 404 returned error can't find the container with id 3389dc08a65f6852ec33ed16e5acf686d46e65950672357aef2dbad4ce300c7c Jan 30 16:19:41 crc kubenswrapper[4740]: W0130 16:19:41.795980 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfec9300e_65b7_42ea_abac_2de63aaa9616.slice/crio-9442ca95b20f64db86cefa93b8091338f02c8465d077d10cd935dc3222304f66 WatchSource:0}: Error finding container 9442ca95b20f64db86cefa93b8091338f02c8465d077d10cd935dc3222304f66: Status 404 returned error can't find the container with id 9442ca95b20f64db86cefa93b8091338f02c8465d077d10cd935dc3222304f66 Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.806897 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-kfpzr"] Jan 30 16:19:41 crc kubenswrapper[4740]: I0130 16:19:41.823419 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-586b4b4677-4tdp8"] Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.010649 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-kfpzr" event={"ID":"fec9300e-65b7-42ea-abac-2de63aaa9616","Type":"ContainerStarted","Data":"9442ca95b20f64db86cefa93b8091338f02c8465d077d10cd935dc3222304f66"} Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.015968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" event={"ID":"221ff40a-c66f-4ddc-87b9-e7d10732b89e","Type":"ContainerStarted","Data":"3767dfb1bc1c174bdaa211f96e02f18815d97b40333c7605e81324ac1147a3fc"} Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.022529 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-586b4b4677-4tdp8" event={"ID":"4876d8e9-6662-4958-bb1a-091307ccfd02","Type":"ContainerStarted","Data":"3389dc08a65f6852ec33ed16e5acf686d46e65950672357aef2dbad4ce300c7c"} Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.121861 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.311903 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-svc\") pod \"c9a47262-edfc-4f90-931b-8bba58325a3c\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.312104 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-swift-storage-0\") pod \"c9a47262-edfc-4f90-931b-8bba58325a3c\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.312129 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xf8f\" (UniqueName: \"kubernetes.io/projected/c9a47262-edfc-4f90-931b-8bba58325a3c-kube-api-access-5xf8f\") pod \"c9a47262-edfc-4f90-931b-8bba58325a3c\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.312156 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-config\") pod \"c9a47262-edfc-4f90-931b-8bba58325a3c\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.312269 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-sb\") pod \"c9a47262-edfc-4f90-931b-8bba58325a3c\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.312318 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-nb\") pod \"c9a47262-edfc-4f90-931b-8bba58325a3c\" (UID: \"c9a47262-edfc-4f90-931b-8bba58325a3c\") " Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.356774 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a47262-edfc-4f90-931b-8bba58325a3c-kube-api-access-5xf8f" (OuterVolumeSpecName: "kube-api-access-5xf8f") pod "c9a47262-edfc-4f90-931b-8bba58325a3c" (UID: "c9a47262-edfc-4f90-931b-8bba58325a3c"). InnerVolumeSpecName "kube-api-access-5xf8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.477218 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xf8f\" (UniqueName: \"kubernetes.io/projected/c9a47262-edfc-4f90-931b-8bba58325a3c-kube-api-access-5xf8f\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.578191 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jz9qh"] Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.594026 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c9a47262-edfc-4f90-931b-8bba58325a3c" (UID: "c9a47262-edfc-4f90-931b-8bba58325a3c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.617832 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.638917 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:19:42 crc kubenswrapper[4740]: E0130 16:19:42.640340 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a47262-edfc-4f90-931b-8bba58325a3c" containerName="init" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.640391 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a47262-edfc-4f90-931b-8bba58325a3c" containerName="init" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.640855 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a47262-edfc-4f90-931b-8bba58325a3c" containerName="init" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.642800 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-config" (OuterVolumeSpecName: "config") pod "c9a47262-edfc-4f90-931b-8bba58325a3c" (UID: "c9a47262-edfc-4f90-931b-8bba58325a3c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.670587 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c9a47262-edfc-4f90-931b-8bba58325a3c" (UID: "c9a47262-edfc-4f90-931b-8bba58325a3c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.674078 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.692569 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c9a47262-edfc-4f90-931b-8bba58325a3c" (UID: "c9a47262-edfc-4f90-931b-8bba58325a3c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.698364 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.698734 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kn5jt" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.698938 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.699096 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.716389 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9a47262-edfc-4f90-931b-8bba58325a3c" (UID: "c9a47262-edfc-4f90-931b-8bba58325a3c"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.722076 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.722132 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.722151 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.722160 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9a47262-edfc-4f90-931b-8bba58325a3c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.789080 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.810650 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wlggn"] Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.824500 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-lh2fh"] Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.827301 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.827794 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.827797 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.827860 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c2f8900-2e7a-48ec-8966-f7f8d211c251-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.827948 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.827976 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-scripts\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.828001 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n82qm\" (UniqueName: \"kubernetes.io/projected/3c2f8900-2e7a-48ec-8966-f7f8d211c251-kube-api-access-n82qm\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.952812 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-svc\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.953820 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.955772 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-config\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.955819 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " 
pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.955915 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-scripts\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.955952 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n82qm\" (UniqueName: \"kubernetes.io/projected/3c2f8900-2e7a-48ec-8966-f7f8d211c251-kube-api-access-n82qm\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.955991 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6lgh\" (UniqueName: \"kubernetes.io/projected/996fb134-f1a9-45ba-bdec-62c17b1fa428-kube-api-access-t6lgh\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.956095 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.956324 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.960475 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.960699 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c2f8900-2e7a-48ec-8966-f7f8d211c251-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.960747 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.966087 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c2f8900-2e7a-48ec-8966-f7f8d211c251-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0" Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 
Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.981287 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:19:42 crc kubenswrapper[4740]: I0130 16:19:42.984076 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-scripts\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:42.987914 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.005029 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.013875 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-lh2fh"]
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.038028 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n82qm\" (UniqueName: \"kubernetes.io/projected/3c2f8900-2e7a-48ec-8966-f7f8d211c251-kube-api-access-n82qm\") pod \"cinder-scheduler-0\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " pod="openstack/cinder-scheduler-0"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.067017 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.067117 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-svc\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.067217 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.067248 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-config\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.067303 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6lgh\" (UniqueName: \"kubernetes.io/projected/996fb134-f1a9-45ba-bdec-62c17b1fa428-kube-api-access-t6lgh\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.067376 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.068909 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.070153 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-svc\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.070675 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-config\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.071044 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.078695 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.110748 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bd6c9fd8-9p6nz" event={"ID":"2701590d-93ff-476c-8ad7-fd118b873a3e","Type":"ContainerStarted","Data":"9d7e160093f7dc44e5df114b3e2f9be85f1095438abaff341b28a40736f736c9"}
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.111784 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64bd6c9fd8-9p6nz"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.130848 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6lgh\" (UniqueName: \"kubernetes.io/projected/996fb134-f1a9-45ba-bdec-62c17b1fa428-kube-api-access-t6lgh\") pod \"dnsmasq-dns-6578955fd5-lh2fh\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.134570 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6" event={"ID":"c9a47262-edfc-4f90-931b-8bba58325a3c","Type":"ContainerDied","Data":"6ebdc59081de899c4b4f26ff9729e0fac02b01c7ee0d0239979690befcc1bb43"}
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.134653 4740 scope.go:117] "RemoveContainer" containerID="58e446ea1b337c9b5fe3d788e497aa983aa04abdbb9199a38cbeaa4dddfe6f7b"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.134902 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-fcbp6"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.200464 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.203041 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerStarted","Data":"9a827ab722b1667e469b45df39acb6aaefda1b108ff063fd32c065181524d0ad"}
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.203191 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.203613 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bf9587c4-75g67"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.203699 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bf9587c4-75g67"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.210908 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.211833 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.236958 4740 generic.go:334] "Generic (PLEG): container finished" podID="221ff40a-c66f-4ddc-87b9-e7d10732b89e" containerID="3767dfb1bc1c174bdaa211f96e02f18815d97b40333c7605e81324ac1147a3fc" exitCode=0
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.237113 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" event={"ID":"221ff40a-c66f-4ddc-87b9-e7d10732b89e","Type":"ContainerDied","Data":"3767dfb1bc1c174bdaa211f96e02f18815d97b40333c7605e81324ac1147a3fc"}
Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.244423 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh"
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.280907 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd6a6d5-372b-412e-b528-c1329736b727-logs\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.281258 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data-custom\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.281338 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.281480 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fd6a6d5-372b-412e-b528-c1329736b727-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.281690 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-scripts\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.281987 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.282241 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nwcw\" (UniqueName: \"kubernetes.io/projected/1fd6a6d5-372b-412e-b528-c1329736b727-kube-api-access-4nwcw\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.312748 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7745b764-mmpkw" event={"ID":"f4b64e71-6b99-4f78-9636-4996a1e4ecee","Type":"ContainerStarted","Data":"3471d91952898368ffcff76e6ba6d9253b64af386943dafe85773a93758cfae5"} Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.313638 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.313883 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7745b764-mmpkw" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.331535 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz9qh" 
event={"ID":"b939d225-58bf-4604-953d-8ed193ae6f0b","Type":"ContainerStarted","Data":"a516247ae58cdd279248126ea9d60ea880a54e7452c327c0ffb4a84bd4f2ca86"} Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.356167 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64bd6c9fd8-9p6nz" podStartSLOduration=7.356140429 podStartE2EDuration="7.356140429s" podCreationTimestamp="2026-01-30 16:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:43.164903947 +0000 UTC m=+1431.801966556" watchObservedRunningTime="2026-01-30 16:19:43.356140429 +0000 UTC m=+1431.993203028" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.373253 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-kfpzr" podStartSLOduration=3.373226763 podStartE2EDuration="3.373226763s" podCreationTimestamp="2026-01-30 16:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:43.22349401 +0000 UTC m=+1431.860556609" watchObservedRunningTime="2026-01-30 16:19:43.373226763 +0000 UTC m=+1432.010289362" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.385650 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-scripts\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.385752 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.385872 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nwcw\" (UniqueName: \"kubernetes.io/projected/1fd6a6d5-372b-412e-b528-c1329736b727-kube-api-access-4nwcw\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.386001 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd6a6d5-372b-412e-b528-c1329736b727-logs\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.386027 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data-custom\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.386046 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.386065 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fd6a6d5-372b-412e-b528-c1329736b727-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.387915 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.392751 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-bf9587c4-75g67" podStartSLOduration=7.392720976 podStartE2EDuration="7.392720976s" podCreationTimestamp="2026-01-30 16:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:43.263962593 +0000 UTC m=+1431.901025192" watchObservedRunningTime="2026-01-30 16:19:43.392720976 +0000 UTC m=+1432.029783575" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.414673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd6a6d5-372b-412e-b528-c1329736b727-logs\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.414752 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fd6a6d5-372b-412e-b528-c1329736b727-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.427374 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.428269 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data-custom\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.454093 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nwcw\" (UniqueName: \"kubernetes.io/projected/1fd6a6d5-372b-412e-b528-c1329736b727-kube-api-access-4nwcw\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.458519 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-scripts\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.469865 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data\") pod \"cinder-api-0\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.601296 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.638219 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-fcbp6"] Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.669663 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-c9f844546-g6v8p"] Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.672453 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.677585 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.677883 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.704191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-combined-ca-bundle\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.707758 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-public-tls-certs\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.708517 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c8ace4b-028d-45a5-af9d-360781681219-logs\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.716773 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-fcbp6"] Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.724534 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-config-data\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.724661 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-internal-tls-certs\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.724694 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrdd2\" (UniqueName: \"kubernetes.io/projected/6c8ace4b-028d-45a5-af9d-360781681219-kube-api-access-wrdd2\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: 
I0130 16:19:43.724807 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-config-data-custom\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.768238 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c9f844546-g6v8p"] Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.770564 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7745b764-mmpkw" podStartSLOduration=7.770536295 podStartE2EDuration="7.770536295s" podCreationTimestamp="2026-01-30 16:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:43.47310874 +0000 UTC m=+1432.110171339" watchObservedRunningTime="2026-01-30 16:19:43.770536295 +0000 UTC m=+1432.407598894" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.830520 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-config-data-custom\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.830622 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-combined-ca-bundle\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.830688 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-public-tls-certs\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.830776 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c8ace4b-028d-45a5-af9d-360781681219-logs\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.830921 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-config-data\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.830973 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-internal-tls-certs\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.831006 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-wrdd2\" (UniqueName: \"kubernetes.io/projected/6c8ace4b-028d-45a5-af9d-360781681219-kube-api-access-wrdd2\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.865154 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c8ace4b-028d-45a5-af9d-360781681219-logs\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.935968 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-config-data-custom\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.937593 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-config-data\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.951599 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-combined-ca-bundle\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.957937 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-internal-tls-certs\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.969368 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c8ace4b-028d-45a5-af9d-360781681219-public-tls-certs\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:43 crc kubenswrapper[4740]: I0130 16:19:43.976928 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrdd2\" (UniqueName: \"kubernetes.io/projected/6c8ace4b-028d-45a5-af9d-360781681219-kube-api-access-wrdd2\") pod \"barbican-api-c9f844546-g6v8p\" (UID: \"6c8ace4b-028d-45a5-af9d-360781681219\") " pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:44 crc kubenswrapper[4740]: I0130 16:19:44.101228 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:44 crc kubenswrapper[4740]: I0130 16:19:44.355032 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-kfpzr" event={"ID":"fec9300e-65b7-42ea-abac-2de63aaa9616","Type":"ContainerStarted","Data":"ea2f769dfd823e14a3025458f20e3c3d13cbb63154a3e9ccf061e87d655e2f7a"} Jan 30 16:19:44 crc kubenswrapper[4740]: I0130 16:19:44.379301 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-586b4b4677-4tdp8" event={"ID":"4876d8e9-6662-4958-bb1a-091307ccfd02","Type":"ContainerStarted","Data":"3de64366e3cb0aafeb53a7bb8ac1d0302566cc044697ad28d4de96f1c70a56a9"} Jan 30 16:19:44 crc kubenswrapper[4740]: I0130 16:19:44.403835 4740 generic.go:334] "Generic (PLEG): container finished" podID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerID="fa6be14a156a4f7df34ccb9e5360bf211af7cf6c5fb71228741f519e56f8b6af" exitCode=0 Jan 30 16:19:44 crc kubenswrapper[4740]: I0130 16:19:44.403970 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz9qh" event={"ID":"b939d225-58bf-4604-953d-8ed193ae6f0b","Type":"ContainerDied","Data":"fa6be14a156a4f7df34ccb9e5360bf211af7cf6c5fb71228741f519e56f8b6af"} Jan 30 16:19:44 crc kubenswrapper[4740]: I0130 16:19:44.463265 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:19:44 crc kubenswrapper[4740]: I0130 16:19:44.534032 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:19:44 crc kubenswrapper[4740]: I0130 16:19:44.654674 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-lh2fh"] Jan 30 16:19:44 crc kubenswrapper[4740]: I0130 16:19:44.882515 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.098289 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c9f844546-g6v8p"] Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.368553 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9a47262-edfc-4f90-931b-8bba58325a3c" path="/var/lib/kubelet/pods/c9a47262-edfc-4f90-931b-8bba58325a3c/volumes" Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.486566 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c9f844546-g6v8p" event={"ID":"6c8ace4b-028d-45a5-af9d-360781681219","Type":"ContainerStarted","Data":"0cab28e1ef6393c67c7a82228f10933dbf356898d858b317b43b575deca5b80b"} Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.497094 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-586b4b4677-4tdp8" event={"ID":"4876d8e9-6662-4958-bb1a-091307ccfd02","Type":"ContainerStarted","Data":"6535fa1ff106b544a1c88ad1df2aec0f5fed30b6a737ca0885074260d3e040c8"} Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.498974 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.520198 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/0.log" Jan 30 
16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.524195 4740 generic.go:334] "Generic (PLEG): container finished" podID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerID="9d7e160093f7dc44e5df114b3e2f9be85f1095438abaff341b28a40736f736c9" exitCode=1 Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.524321 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bd6c9fd8-9p6nz" event={"ID":"2701590d-93ff-476c-8ad7-fd118b873a3e","Type":"ContainerDied","Data":"9d7e160093f7dc44e5df114b3e2f9be85f1095438abaff341b28a40736f736c9"} Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.525367 4740 scope.go:117] "RemoveContainer" containerID="9d7e160093f7dc44e5df114b3e2f9be85f1095438abaff341b28a40736f736c9" Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.532772 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" event={"ID":"996fb134-f1a9-45ba-bdec-62c17b1fa428","Type":"ContainerStarted","Data":"8aa5b34b638c5e92ed114a274d74432becf35f1059d5c252a0a11a979641ece2"} Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.538496 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3c2f8900-2e7a-48ec-8966-f7f8d211c251","Type":"ContainerStarted","Data":"e182b2024cb3b615a4ab7ad32fe6dfcc31fe9666b5c0ff951ccdffa55466e2bc"} Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.554924 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-586b4b4677-4tdp8" podStartSLOduration=6.554899402 podStartE2EDuration="6.554899402s" podCreationTimestamp="2026-01-30 16:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:45.54714425 +0000 UTC m=+1434.184206859" watchObservedRunningTime="2026-01-30 16:19:45.554899402 +0000 UTC m=+1434.191962001" Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.568231 4740 generic.go:334] "Generic (PLEG): container finished" podID="09b3c286-aa27-4b55-8b05-50484d643da5" containerID="9a827ab722b1667e469b45df39acb6aaefda1b108ff063fd32c065181524d0ad" exitCode=1 Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.570151 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerDied","Data":"9a827ab722b1667e469b45df39acb6aaefda1b108ff063fd32c065181524d0ad"} Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.577600 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.584952 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" event={"ID":"221ff40a-c66f-4ddc-87b9-e7d10732b89e","Type":"ContainerStarted","Data":"4acce566de5e0f83432891eb77f4fa5797dff0005512faf2e3fda9bdecdbbcaa"} Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.585204 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" podUID="221ff40a-c66f-4ddc-87b9-e7d10732b89e" containerName="dnsmasq-dns" containerID="cri-o://4acce566de5e0f83432891eb77f4fa5797dff0005512faf2e3fda9bdecdbbcaa" gracePeriod=10 Jan 30 16:19:45 
crc kubenswrapper[4740]: I0130 16:19:45.585347 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.590514 4740 scope.go:117] "RemoveContainer" containerID="9a827ab722b1667e469b45df39acb6aaefda1b108ff063fd32c065181524d0ad" Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.605847 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fd6a6d5-372b-412e-b528-c1329736b727","Type":"ContainerStarted","Data":"8ece6ccc339a2a3bc018458fda225e7396bbcdf30edd344756c7afb07938807f"} Jan 30 16:19:45 crc kubenswrapper[4740]: I0130 16:19:45.665504 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" podStartSLOduration=9.665473764 podStartE2EDuration="9.665473764s" podCreationTimestamp="2026-01-30 16:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:45.640052904 +0000 UTC m=+1434.277115503" watchObservedRunningTime="2026-01-30 16:19:45.665473764 +0000 UTC m=+1434.302536363" Jan 30 16:19:46 crc kubenswrapper[4740]: I0130 16:19:46.266562 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:46 crc kubenswrapper[4740]: I0130 16:19:46.267961 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:19:46 crc kubenswrapper[4740]: I0130 16:19:46.649507 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c9f844546-g6v8p" event={"ID":"6c8ace4b-028d-45a5-af9d-360781681219","Type":"ContainerStarted","Data":"81dcc7db81ce6cc4dc05eced9c48ceb5147779ee49e0191942fdf2584ce3a4e2"} Jan 30 16:19:46 crc kubenswrapper[4740]: I0130 16:19:46.664041 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz9qh" event={"ID":"b939d225-58bf-4604-953d-8ed193ae6f0b","Type":"ContainerStarted","Data":"d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173"} Jan 30 16:19:46 crc kubenswrapper[4740]: I0130 16:19:46.667560 4740 generic.go:334] "Generic (PLEG): container finished" podID="996fb134-f1a9-45ba-bdec-62c17b1fa428" containerID="3326a04848e0479fc6c3a30bc06e71364c5157358fe3c86edd956ec0228a4a1f" exitCode=0 Jan 30 16:19:46 crc kubenswrapper[4740]: I0130 16:19:46.667666 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" event={"ID":"996fb134-f1a9-45ba-bdec-62c17b1fa428","Type":"ContainerDied","Data":"3326a04848e0479fc6c3a30bc06e71364c5157358fe3c86edd956ec0228a4a1f"} Jan 30 16:19:46 crc kubenswrapper[4740]: I0130 16:19:46.674246 4740 generic.go:334] "Generic (PLEG): container finished" podID="221ff40a-c66f-4ddc-87b9-e7d10732b89e" containerID="4acce566de5e0f83432891eb77f4fa5797dff0005512faf2e3fda9bdecdbbcaa" exitCode=0 Jan 30 16:19:46 crc kubenswrapper[4740]: I0130 16:19:46.675545 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" event={"ID":"221ff40a-c66f-4ddc-87b9-e7d10732b89e","Type":"ContainerDied","Data":"4acce566de5e0f83432891eb77f4fa5797dff0005512faf2e3fda9bdecdbbcaa"} Jan 30 
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.267396 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused"
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.276152 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.508095 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn"
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.626613 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-svc\") pod \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") "
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.626682 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-swift-storage-0\") pod \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") "
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.626786 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-config\") pod \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") "
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.626841 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-nb\") pod \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") "
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.627078 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn6nn\" (UniqueName: \"kubernetes.io/projected/221ff40a-c66f-4ddc-87b9-e7d10732b89e-kube-api-access-zn6nn\") pod \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") "
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.627143 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-sb\") pod \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\" (UID: \"221ff40a-c66f-4ddc-87b9-e7d10732b89e\") "
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.650519 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/221ff40a-c66f-4ddc-87b9-e7d10732b89e-kube-api-access-zn6nn" (OuterVolumeSpecName: "kube-api-access-zn6nn") pod "221ff40a-c66f-4ddc-87b9-e7d10732b89e" (UID: "221ff40a-c66f-4ddc-87b9-e7d10732b89e"). InnerVolumeSpecName "kube-api-access-zn6nn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.701186 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn" event={"ID":"221ff40a-c66f-4ddc-87b9-e7d10732b89e","Type":"ContainerDied","Data":"8fe44a93cf50f5871c25cf782a39a8260e9d9f29395ea64ad6d47886207cff70"}
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.701274 4740 scope.go:117] "RemoveContainer" containerID="4acce566de5e0f83432891eb77f4fa5797dff0005512faf2e3fda9bdecdbbcaa"
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.701502 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-wlggn"
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.712476 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fd6a6d5-372b-412e-b528-c1329736b727","Type":"ContainerStarted","Data":"3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77"}
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.713184 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "221ff40a-c66f-4ddc-87b9-e7d10732b89e" (UID: "221ff40a-c66f-4ddc-87b9-e7d10732b89e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.716459 4740 generic.go:334] "Generic (PLEG): container finished" podID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerID="d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173" exitCode=0
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.716542 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz9qh" event={"ID":"b939d225-58bf-4604-953d-8ed193ae6f0b","Type":"ContainerDied","Data":"d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173"}
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.731621 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn6nn\" (UniqueName: \"kubernetes.io/projected/221ff40a-c66f-4ddc-87b9-e7d10732b89e-kube-api-access-zn6nn\") on node \"crc\" DevicePath \"\""
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.731706 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.735914 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-config" (OuterVolumeSpecName: "config") pod "221ff40a-c66f-4ddc-87b9-e7d10732b89e" (UID: "221ff40a-c66f-4ddc-87b9-e7d10732b89e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.762294 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "221ff40a-c66f-4ddc-87b9-e7d10732b89e" (UID: "221ff40a-c66f-4ddc-87b9-e7d10732b89e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.771436 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "221ff40a-c66f-4ddc-87b9-e7d10732b89e" (UID: "221ff40a-c66f-4ddc-87b9-e7d10732b89e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.782550 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "221ff40a-c66f-4ddc-87b9-e7d10732b89e" (UID: "221ff40a-c66f-4ddc-87b9-e7d10732b89e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.834910 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.834984 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.835006 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:19:47 crc kubenswrapper[4740]: I0130 16:19:47.835018 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/221ff40a-c66f-4ddc-87b9-e7d10732b89e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 16:19:48 crc kubenswrapper[4740]: I0130 16:19:48.002284 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/0.log"
Jan 30 16:19:48 crc kubenswrapper[4740]: I0130 16:19:48.007894 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bd6c9fd8-9p6nz" event={"ID":"2701590d-93ff-476c-8ad7-fd118b873a3e","Type":"ContainerStarted","Data":"25a01ff01c27594aec37c84d6bf3944ae2e5724d91564e6cb1141540689b0196"}
Jan 30 16:19:48 crc kubenswrapper[4740]: I0130 16:19:48.008969 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64bd6c9fd8-9p6nz"
Jan 30 16:19:48 crc kubenswrapper[4740]: I0130 16:19:48.056850 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wlggn"]
Jan 30 16:19:48 crc kubenswrapper[4740]: I0130 16:19:48.067451 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-wlggn"]
Jan 30 16:19:49 crc kubenswrapper[4740]: I0130 16:19:49.018513 4740 scope.go:117] "RemoveContainer" containerID="3767dfb1bc1c174bdaa211f96e02f18815d97b40333c7605e81324ac1147a3fc"
Jan 30 16:19:49 crc kubenswrapper[4740]: I0130 16:19:49.048021 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/1.log"
Jan 30 16:19:49 crc kubenswrapper[4740]: I0130 16:19:49.049063 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/0.log"
path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/0.log" Jan 30 16:19:49 crc kubenswrapper[4740]: I0130 16:19:49.052554 4740 generic.go:334] "Generic (PLEG): container finished" podID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerID="25a01ff01c27594aec37c84d6bf3944ae2e5724d91564e6cb1141540689b0196" exitCode=1 Jan 30 16:19:49 crc kubenswrapper[4740]: I0130 16:19:49.052616 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bd6c9fd8-9p6nz" event={"ID":"2701590d-93ff-476c-8ad7-fd118b873a3e","Type":"ContainerDied","Data":"25a01ff01c27594aec37c84d6bf3944ae2e5724d91564e6cb1141540689b0196"} Jan 30 16:19:49 crc kubenswrapper[4740]: I0130 16:19:49.053705 4740 scope.go:117] "RemoveContainer" containerID="25a01ff01c27594aec37c84d6bf3944ae2e5724d91564e6cb1141540689b0196" Jan 30 16:19:49 crc kubenswrapper[4740]: E0130 16:19:49.054177 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-64bd6c9fd8-9p6nz_openstack(2701590d-93ff-476c-8ad7-fd118b873a3e)\"" pod="openstack/neutron-64bd6c9fd8-9p6nz" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" Jan 30 16:19:49 crc kubenswrapper[4740]: I0130 16:19:49.269912 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:19:49 crc kubenswrapper[4740]: I0130 16:19:49.500959 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="221ff40a-c66f-4ddc-87b9-e7d10732b89e" path="/var/lib/kubelet/pods/221ff40a-c66f-4ddc-87b9-e7d10732b89e/volumes" Jan 30 16:19:49 crc kubenswrapper[4740]: I0130 16:19:49.839514 4740 scope.go:117] "RemoveContainer" containerID="9d7e160093f7dc44e5df114b3e2f9be85f1095438abaff341b28a40736f736c9" Jan 30 16:19:50 crc kubenswrapper[4740]: I0130 16:19:50.091174 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" event={"ID":"996fb134-f1a9-45ba-bdec-62c17b1fa428","Type":"ContainerStarted","Data":"cdc41b1425dec44f3c04cd49dcadecacfb349cf9f836dcf649e34dc43f7ec2c0"} Jan 30 16:19:50 crc kubenswrapper[4740]: I0130 16:19:50.091674 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:19:50 crc kubenswrapper[4740]: I0130 16:19:50.096830 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerStarted","Data":"0b799a2c7b13c34a456b41895d90df2c90835375fb77e146374bcea0b3a62dbb"} Jan 30 16:19:50 crc kubenswrapper[4740]: I0130 16:19:50.097715 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:50 crc kubenswrapper[4740]: I0130 16:19:50.099095 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:19:50 crc kubenswrapper[4740]: I0130 16:19:50.126601 4740 scope.go:117] "RemoveContainer" 
containerID="25a01ff01c27594aec37c84d6bf3944ae2e5724d91564e6cb1141540689b0196" Jan 30 16:19:50 crc kubenswrapper[4740]: E0130 16:19:50.126948 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-64bd6c9fd8-9p6nz_openstack(2701590d-93ff-476c-8ad7-fd118b873a3e)\"" pod="openstack/neutron-64bd6c9fd8-9p6nz" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" Jan 30 16:19:50 crc kubenswrapper[4740]: I0130 16:19:50.127123 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c9f844546-g6v8p" event={"ID":"6c8ace4b-028d-45a5-af9d-360781681219","Type":"ContainerStarted","Data":"73ffb917c3eaa15e0844d961893ebc6eb2de2b8bb7959b9f339c6b539c27bbc1"} Jan 30 16:19:50 crc kubenswrapper[4740]: I0130 16:19:50.127431 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:50 crc kubenswrapper[4740]: I0130 16:19:50.129024 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" podStartSLOduration=8.128985348 podStartE2EDuration="8.128985348s" podCreationTimestamp="2026-01-30 16:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:50.126697462 +0000 UTC m=+1438.763760061" watchObservedRunningTime="2026-01-30 16:19:50.128985348 +0000 UTC m=+1438.766047947" Jan 30 16:19:50 crc kubenswrapper[4740]: I0130 16:19:50.242719 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-c9f844546-g6v8p" podStartSLOduration=7.242691248 podStartE2EDuration="7.242691248s" podCreationTimestamp="2026-01-30 16:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:50.213477593 +0000 UTC m=+1438.850540192" watchObservedRunningTime="2026-01-30 16:19:50.242691248 +0000 UTC m=+1438.879753857" Jan 30 16:19:51 crc kubenswrapper[4740]: I0130 16:19:51.148691 4740 generic.go:334] "Generic (PLEG): container finished" podID="09b3c286-aa27-4b55-8b05-50484d643da5" containerID="0b799a2c7b13c34a456b41895d90df2c90835375fb77e146374bcea0b3a62dbb" exitCode=1 Jan 30 16:19:51 crc kubenswrapper[4740]: I0130 16:19:51.148834 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerDied","Data":"0b799a2c7b13c34a456b41895d90df2c90835375fb77e146374bcea0b3a62dbb"} Jan 30 16:19:51 crc kubenswrapper[4740]: I0130 16:19:51.149580 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:19:51 crc kubenswrapper[4740]: I0130 16:19:51.152202 4740 scope.go:117] "RemoveContainer" containerID="0b799a2c7b13c34a456b41895d90df2c90835375fb77e146374bcea0b3a62dbb" Jan 30 16:19:51 crc kubenswrapper[4740]: E0130 16:19:51.154287 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-api 
pod=barbican-api-bf9587c4-75g67_openstack(09b3c286-aa27-4b55-8b05-50484d643da5)\"" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" Jan 30 16:19:51 crc kubenswrapper[4740]: I0130 16:19:51.158566 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api-log" containerID="cri-o://3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77" gracePeriod=30 Jan 30 16:19:51 crc kubenswrapper[4740]: I0130 16:19:51.158739 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api" containerID="cri-o://2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309" gracePeriod=30 Jan 30 16:19:51 crc kubenswrapper[4740]: I0130 16:19:51.158829 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fd6a6d5-372b-412e-b528-c1329736b727","Type":"ContainerStarted","Data":"2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309"} Jan 30 16:19:51 crc kubenswrapper[4740]: I0130 16:19:51.158907 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 16:19:51 crc kubenswrapper[4740]: I0130 16:19:51.159152 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:51 crc kubenswrapper[4740]: I0130 16:19:51.207003 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.206976279 podStartE2EDuration="9.206976279s" podCreationTimestamp="2026-01-30 16:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:19:51.194692924 +0000 UTC m=+1439.831755523" watchObservedRunningTime="2026-01-30 16:19:51.206976279 +0000 UTC m=+1439.844038878" Jan 30 16:19:52 crc kubenswrapper[4740]: I0130 16:19:52.171839 4740 generic.go:334] "Generic (PLEG): container finished" podID="1fd6a6d5-372b-412e-b528-c1329736b727" containerID="3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77" exitCode=143 Jan 30 16:19:52 crc kubenswrapper[4740]: I0130 16:19:52.171928 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fd6a6d5-372b-412e-b528-c1329736b727","Type":"ContainerDied","Data":"3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77"} Jan 30 16:19:52 crc kubenswrapper[4740]: I0130 16:19:52.172071 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:19:52 crc kubenswrapper[4740]: I0130 16:19:52.172901 4740 scope.go:117] "RemoveContainer" containerID="0b799a2c7b13c34a456b41895d90df2c90835375fb77e146374bcea0b3a62dbb" Jan 30 16:19:52 crc kubenswrapper[4740]: I0130 16:19:52.173064 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:19:52 crc kubenswrapper[4740]: E0130 16:19:52.173264 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-api\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=barbican-api pod=barbican-api-bf9587c4-75g67_openstack(09b3c286-aa27-4b55-8b05-50484d643da5)\"" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" Jan 30 16:19:52 crc kubenswrapper[4740]: I0130 16:19:52.266565 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:52 crc kubenswrapper[4740]: I0130 16:19:52.267299 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:19:52 crc kubenswrapper[4740]: I0130 16:19:52.267372 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:19:52 crc kubenswrapper[4740]: I0130 16:19:52.267472 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:19:53 crc kubenswrapper[4740]: I0130 16:19:53.201759 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:19:53 crc kubenswrapper[4740]: I0130 16:19:53.201940 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="barbican-api-log" containerStatusID={"Type":"cri-o","ID":"70b7474a4233a59043fb6e39a447187a286e78aab0acf7c3649d4ff8171f8b61"} pod="openstack/barbican-api-bf9587c4-75g67" containerMessage="Container barbican-api-log failed liveness probe, will be restarted" Jan 30 16:19:53 crc kubenswrapper[4740]: I0130 16:19:53.202315 4740 scope.go:117] "RemoveContainer" containerID="0b799a2c7b13c34a456b41895d90df2c90835375fb77e146374bcea0b3a62dbb" Jan 30 16:19:53 crc kubenswrapper[4740]: I0130 16:19:53.202376 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" containerID="cri-o://70b7474a4233a59043fb6e39a447187a286e78aab0acf7c3649d4ff8171f8b61" gracePeriod=30 Jan 30 16:19:53 crc kubenswrapper[4740]: I0130 16:19:53.405428 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 16:19:53 crc kubenswrapper[4740]: I0130 16:19:53.405763 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:19:53 crc kubenswrapper[4740]: I0130 16:19:53.417031 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 16:19:53 crc kubenswrapper[4740]: I0130 16:19:53.417164 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:19:53 crc kubenswrapper[4740]: I0130 16:19:53.417861 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 16:19:53 crc kubenswrapper[4740]: I0130 16:19:53.417933 4740 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 16:19:54 crc kubenswrapper[4740]: I0130 16:19:54.222430 4740 generic.go:334] "Generic (PLEG): container finished" podID="fec9300e-65b7-42ea-abac-2de63aaa9616" containerID="ea2f769dfd823e14a3025458f20e3c3d13cbb63154a3e9ccf061e87d655e2f7a" exitCode=0 Jan 30 16:19:54 crc kubenswrapper[4740]: I0130 16:19:54.223603 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-kfpzr" event={"ID":"fec9300e-65b7-42ea-abac-2de63aaa9616","Type":"ContainerDied","Data":"ea2f769dfd823e14a3025458f20e3c3d13cbb63154a3e9ccf061e87d655e2f7a"} Jan 30 16:19:54 crc kubenswrapper[4740]: I0130 16:19:54.232838 4740 generic.go:334] "Generic (PLEG): container finished" podID="09b3c286-aa27-4b55-8b05-50484d643da5" containerID="70b7474a4233a59043fb6e39a447187a286e78aab0acf7c3649d4ff8171f8b61" exitCode=143 Jan 30 16:19:54 crc kubenswrapper[4740]: I0130 16:19:54.234282 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerDied","Data":"70b7474a4233a59043fb6e39a447187a286e78aab0acf7c3649d4ff8171f8b61"} Jan 30 16:19:55 crc kubenswrapper[4740]: I0130 16:19:55.136655 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-c9f844546-g6v8p" podUID="6c8ace4b-028d-45a5-af9d-360781681219" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.187:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:19:56 crc kubenswrapper[4740]: I0130 16:19:56.268226 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3c2f8900-2e7a-48ec-8966-f7f8d211c251","Type":"ContainerStarted","Data":"ccdc04bd64a8a05abc7f0e79b1f0ea2c126cec566db039bc6fcd517d51850bb4"} Jan 30 16:19:56 crc kubenswrapper[4740]: I0130 16:19:56.824184 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:56 crc kubenswrapper[4740]: I0130 16:19:56.824331 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:19:57 crc kubenswrapper[4740]: I0130 16:19:57.050332 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c9f844546-g6v8p" Jan 30 16:19:57 crc kubenswrapper[4740]: I0130 16:19:57.272537 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-bf9587c4-75g67"] Jan 30 16:19:57 crc kubenswrapper[4740]: I0130 16:19:57.273558 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:19:57 crc kubenswrapper[4740]: I0130 16:19:57.858783 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.027697 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-combined-ca-bundle\") pod \"fec9300e-65b7-42ea-abac-2de63aaa9616\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.027891 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-certs\") pod \"fec9300e-65b7-42ea-abac-2de63aaa9616\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.027939 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pc58\" (UniqueName: \"kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-kube-api-access-7pc58\") pod \"fec9300e-65b7-42ea-abac-2de63aaa9616\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.028004 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-config-data\") pod \"fec9300e-65b7-42ea-abac-2de63aaa9616\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.028093 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-scripts\") pod \"fec9300e-65b7-42ea-abac-2de63aaa9616\" (UID: \"fec9300e-65b7-42ea-abac-2de63aaa9616\") " Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.036554 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-certs" (OuterVolumeSpecName: "certs") pod "fec9300e-65b7-42ea-abac-2de63aaa9616" (UID: "fec9300e-65b7-42ea-abac-2de63aaa9616"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.052593 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-kube-api-access-7pc58" (OuterVolumeSpecName: "kube-api-access-7pc58") pod "fec9300e-65b7-42ea-abac-2de63aaa9616" (UID: "fec9300e-65b7-42ea-abac-2de63aaa9616"). InnerVolumeSpecName "kube-api-access-7pc58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.054597 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-scripts" (OuterVolumeSpecName: "scripts") pod "fec9300e-65b7-42ea-abac-2de63aaa9616" (UID: "fec9300e-65b7-42ea-abac-2de63aaa9616"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.064745 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-config-data" (OuterVolumeSpecName: "config-data") pod "fec9300e-65b7-42ea-abac-2de63aaa9616" (UID: "fec9300e-65b7-42ea-abac-2de63aaa9616"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.074371 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fec9300e-65b7-42ea-abac-2de63aaa9616" (UID: "fec9300e-65b7-42ea-abac-2de63aaa9616"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.131460 4740 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.131514 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pc58\" (UniqueName: \"kubernetes.io/projected/fec9300e-65b7-42ea-abac-2de63aaa9616-kube-api-access-7pc58\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.131534 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.131546 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.131559 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec9300e-65b7-42ea-abac-2de63aaa9616-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.247671 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.323389 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p77nv"] Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.323754 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" podUID="e2398e05-2c84-4851-922a-3e6a7c9e3994" containerName="dnsmasq-dns" containerID="cri-o://2cc7fe78c8ee9c4350e085f05da149ad6bbbdebfefa0da3f35df103e4d9ab8cd" gracePeriod=10 Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.375662 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-kfpzr" event={"ID":"fec9300e-65b7-42ea-abac-2de63aaa9616","Type":"ContainerDied","Data":"9442ca95b20f64db86cefa93b8091338f02c8465d077d10cd935dc3222304f66"} Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.375717 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9442ca95b20f64db86cefa93b8091338f02c8465d077d10cd935dc3222304f66" Jan 30 16:19:58 crc kubenswrapper[4740]: I0130 16:19:58.375752 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-kfpzr" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.151899 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:19:59 crc kubenswrapper[4740]: E0130 16:19:59.152876 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221ff40a-c66f-4ddc-87b9-e7d10732b89e" containerName="dnsmasq-dns" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.152901 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="221ff40a-c66f-4ddc-87b9-e7d10732b89e" containerName="dnsmasq-dns" Jan 30 16:19:59 crc kubenswrapper[4740]: E0130 16:19:59.152944 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec9300e-65b7-42ea-abac-2de63aaa9616" containerName="cloudkitty-storageinit" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.152953 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec9300e-65b7-42ea-abac-2de63aaa9616" containerName="cloudkitty-storageinit" Jan 30 16:19:59 crc kubenswrapper[4740]: E0130 16:19:59.152985 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221ff40a-c66f-4ddc-87b9-e7d10732b89e" containerName="init" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.152992 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="221ff40a-c66f-4ddc-87b9-e7d10732b89e" containerName="init" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.153228 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="221ff40a-c66f-4ddc-87b9-e7d10732b89e" containerName="dnsmasq-dns" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.153282 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec9300e-65b7-42ea-abac-2de63aaa9616" containerName="cloudkitty-storageinit" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.163577 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.171955 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.172187 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.172378 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.175684 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-54bf9" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.198786 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.218042 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.230748 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-qglcl"] Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.233162 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.247329 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-qglcl"] Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.265791 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.265859 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trfsj\" (UniqueName: \"kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-kube-api-access-trfsj\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.265898 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.265966 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-certs\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.266058 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.266118 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-scripts\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.368620 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.368708 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.368807 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-config\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.368883 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.368916 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trfsj\" (UniqueName: \"kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-kube-api-access-trfsj\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.368953 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.369048 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-certs\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.369117 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.369175 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-svc\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.369211 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.369287 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-scripts\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.369319 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p45fh\" (UniqueName: \"kubernetes.io/projected/eca56045-0ade-4faf-b0a2-17a4702c1fd8-kube-api-access-p45fh\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: 
\"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.382468 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.383223 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.383971 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.391069 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-scripts\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.407170 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-certs\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.412087 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trfsj\" (UniqueName: \"kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-kube-api-access-trfsj\") pod \"cloudkitty-proc-0\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.435572 4740 generic.go:334] "Generic (PLEG): container finished" podID="e2398e05-2c84-4851-922a-3e6a7c9e3994" containerID="2cc7fe78c8ee9c4350e085f05da149ad6bbbdebfefa0da3f35df103e4d9ab8cd" exitCode=0 Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.435628 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" event={"ID":"e2398e05-2c84-4851-922a-3e6a7c9e3994","Type":"ContainerDied","Data":"2cc7fe78c8ee9c4350e085f05da149ad6bbbdebfefa0da3f35df103e4d9ab8cd"} Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.471483 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p45fh\" (UniqueName: \"kubernetes.io/projected/eca56045-0ade-4faf-b0a2-17a4702c1fd8-kube-api-access-p45fh\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.471612 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " 
pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.471645 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.471700 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-config\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.471776 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.471808 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-svc\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.472748 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-svc\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.474906 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.479439 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-config\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.480149 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.487708 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.498156 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p45fh\" (UniqueName: \"kubernetes.io/projected/eca56045-0ade-4faf-b0a2-17a4702c1fd8-kube-api-access-p45fh\") pod \"dnsmasq-dns-58bd69657f-qglcl\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.500715 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.564026 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.631616 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.634073 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.640102 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.653256 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.791417 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-certs\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.791493 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.791594 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.791657 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-scripts\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.791682 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.791792 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk5tv\" (UniqueName: \"kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-kube-api-access-hk5tv\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " 
pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.791914 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c33ca9-470b-4f78-891a-7e95cd279000-logs\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.896252 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-certs\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.897920 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.898017 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.898072 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-scripts\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.898090 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.898182 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk5tv\" (UniqueName: \"kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-kube-api-access-hk5tv\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.898282 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c33ca9-470b-4f78-891a-7e95cd279000-logs\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.898941 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c33ca9-470b-4f78-891a-7e95cd279000-logs\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.912433 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data\") pod \"cloudkitty-api-0\" (UID: 
\"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.918408 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-certs\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.919099 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.924004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-scripts\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.930049 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:19:59 crc kubenswrapper[4740]: I0130 16:19:59.943551 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk5tv\" (UniqueName: \"kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-kube-api-access-hk5tv\") pod \"cloudkitty-api-0\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:00 crc kubenswrapper[4740]: I0130 16:20:00.002279 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 30 16:20:00 crc kubenswrapper[4740]: I0130 16:20:00.537816 4740 scope.go:117] "RemoveContainer" containerID="9a827ab722b1667e469b45df39acb6aaefda1b108ff063fd32c065181524d0ad" Jan 30 16:20:00 crc kubenswrapper[4740]: E0130 16:20:00.554955 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 30 16:20:00 crc kubenswrapper[4740]: E0130 16:20:00.555203 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mkq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(95fa343f-47ce-425f-a254-58264f0a3f6b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:20:00 crc kubenswrapper[4740]: E0130 16:20:00.556673 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1268691984/1\\\": happened during read: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="95fa343f-47ce-425f-a254-58264f0a3f6b" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.358966 4740 scope.go:117] "RemoveContainer" containerID="25a01ff01c27594aec37c84d6bf3944ae2e5724d91564e6cb1141540689b0196" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.393283 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.507033 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-svc\") pod \"e2398e05-2c84-4851-922a-3e6a7c9e3994\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.507300 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl7rd\" (UniqueName: \"kubernetes.io/projected/e2398e05-2c84-4851-922a-3e6a7c9e3994-kube-api-access-vl7rd\") pod \"e2398e05-2c84-4851-922a-3e6a7c9e3994\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.507376 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-config\") pod \"e2398e05-2c84-4851-922a-3e6a7c9e3994\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.507481 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-nb\") pod \"e2398e05-2c84-4851-922a-3e6a7c9e3994\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.507524 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-swift-storage-0\") pod \"e2398e05-2c84-4851-922a-3e6a7c9e3994\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.507571 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-sb\") pod \"e2398e05-2c84-4851-922a-3e6a7c9e3994\" (UID: \"e2398e05-2c84-4851-922a-3e6a7c9e3994\") " Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.516053 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" event={"ID":"e2398e05-2c84-4851-922a-3e6a7c9e3994","Type":"ContainerDied","Data":"cec197cc74331c524124cf1457a2e1c116fca3223331dcfe091c0cfb8f1929ff"} Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.516161 4740 scope.go:117] "RemoveContainer" containerID="2cc7fe78c8ee9c4350e085f05da149ad6bbbdebfefa0da3f35df103e4d9ab8cd" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.516438 4740 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.540682 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2398e05-2c84-4851-922a-3e6a7c9e3994-kube-api-access-vl7rd" (OuterVolumeSpecName: "kube-api-access-vl7rd") pod "e2398e05-2c84-4851-922a-3e6a7c9e3994" (UID: "e2398e05-2c84-4851-922a-3e6a7c9e3994"). InnerVolumeSpecName "kube-api-access-vl7rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.618884 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl7rd\" (UniqueName: \"kubernetes.io/projected/e2398e05-2c84-4851-922a-3e6a7c9e3994-kube-api-access-vl7rd\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.651765 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7cc6d874d7-q46r7" event={"ID":"06bc0d0f-04a5-4703-97a4-6d44ccc42006","Type":"ContainerStarted","Data":"426c4ffa7d4d45c0198dfd0682fec306951917cd418960d7b35ec11154a2b053"} Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.693525 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e2398e05-2c84-4851-922a-3e6a7c9e3994" (UID: "e2398e05-2c84-4851-922a-3e6a7c9e3994"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.696513 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.703030 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerStarted","Data":"1f6d81e5a78f35608233e14df2c88c5163b48bf8f79d11f80736c5923c93a629"} Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.709502 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" event={"ID":"92b93f04-34e0-47a3-af34-cd7e7717c444","Type":"ContainerStarted","Data":"69047e82eba8bc9a2c734764f7159dfd688f9a008cf5967d042fe51e326b08bc"} Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.716446 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz9qh" event={"ID":"b939d225-58bf-4604-953d-8ed193ae6f0b","Type":"ContainerStarted","Data":"1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756"} Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.722474 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.726702 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/1.log" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.727984 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95fa343f-47ce-425f-a254-58264f0a3f6b" containerName="ceilometer-central-agent" 
containerID="cri-o://18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340" gracePeriod=30 Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.728564 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="95fa343f-47ce-425f-a254-58264f0a3f6b" containerName="sg-core" containerID="cri-o://5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4" gracePeriod=30 Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.754075 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-config" (OuterVolumeSpecName: "config") pod "e2398e05-2c84-4851-922a-3e6a7c9e3994" (UID: "e2398e05-2c84-4851-922a-3e6a7c9e3994"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.790652 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jz9qh" podStartSLOduration=3.479876025 podStartE2EDuration="20.790629237s" podCreationTimestamp="2026-01-30 16:19:41 +0000 UTC" firstStartedPulling="2026-01-30 16:19:43.377991841 +0000 UTC m=+1432.015054440" lastFinishedPulling="2026-01-30 16:20:00.688745053 +0000 UTC m=+1449.325807652" observedRunningTime="2026-01-30 16:20:01.754644915 +0000 UTC m=+1450.391707514" watchObservedRunningTime="2026-01-30 16:20:01.790629237 +0000 UTC m=+1450.427691836" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.801276 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2398e05-2c84-4851-922a-3e6a7c9e3994" (UID: "e2398e05-2c84-4851-922a-3e6a7c9e3994"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.810650 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e2398e05-2c84-4851-922a-3e6a7c9e3994" (UID: "e2398e05-2c84-4851-922a-3e6a7c9e3994"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.846760 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2398e05-2c84-4851-922a-3e6a7c9e3994" (UID: "e2398e05-2c84-4851-922a-3e6a7c9e3994"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.852662 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.852701 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.852712 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:01 crc kubenswrapper[4740]: I0130 16:20:01.852721 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2398e05-2c84-4851-922a-3e6a7c9e3994-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.059852 4740 scope.go:117] "RemoveContainer" containerID="ba97843e560710ef3a46741da6387fb3e1a32c449c73e975819dbf7ceac2c379" Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.167632 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-qglcl"] Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.203879 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.223514 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p77nv"] Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.271060 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": dial tcp 10.217.0.178:9311: connect: connection refused" Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.322213 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p77nv"] Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.812948 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" event={"ID":"eca56045-0ade-4faf-b0a2-17a4702c1fd8","Type":"ContainerStarted","Data":"4381fcd1746ba81790814fae31b7d453ab06593e7f4ea4d2f4bddf0dcebe8c8a"} Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.840543 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" event={"ID":"92b93f04-34e0-47a3-af34-cd7e7717c444","Type":"ContainerStarted","Data":"9c6cbc03cc857fb3796409cf24f57e544db3c7d5f6df9a6a3165ed0fb321a6c6"} Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.858368 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3c2f8900-2e7a-48ec-8966-f7f8d211c251","Type":"ContainerStarted","Data":"a8ed13bfd769619a43889a60e55c7b8ebba6ba0d9f67356d707a259e919fef5f"} Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.906257 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-649cd9f6b8-lgj8x" podStartSLOduration=5.327396483 podStartE2EDuration="27.90622564s" podCreationTimestamp="2026-01-30 16:19:35 
+0000 UTC" firstStartedPulling="2026-01-30 16:19:37.975071772 +0000 UTC m=+1426.612134371" lastFinishedPulling="2026-01-30 16:20:00.553900929 +0000 UTC m=+1449.190963528" observedRunningTime="2026-01-30 16:20:02.878947214 +0000 UTC m=+1451.516009813" watchObservedRunningTime="2026-01-30 16:20:02.90622564 +0000 UTC m=+1451.543288239" Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.921884 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7cc6d874d7-q46r7" event={"ID":"06bc0d0f-04a5-4703-97a4-6d44ccc42006","Type":"ContainerStarted","Data":"00956c5e2d70539f81ec89715d3b08a9a36a375454bec361e8d22c1f72cec5d9"} Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.951630 4740 generic.go:334] "Generic (PLEG): container finished" podID="95fa343f-47ce-425f-a254-58264f0a3f6b" containerID="5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4" exitCode=2 Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.951757 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95fa343f-47ce-425f-a254-58264f0a3f6b","Type":"ContainerDied","Data":"5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4"} Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.963695 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"eba7c81b-ae84-4672-9108-001326602860","Type":"ContainerStarted","Data":"309f023053989adce89ffe697f8628d7921ef6287849b24ec77f24c94ceb5650"} Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.995870 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/1.log" Jan 30 16:20:02 crc kubenswrapper[4740]: I0130 16:20:02.998282 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=19.6404021 podStartE2EDuration="20.998251592s" podCreationTimestamp="2026-01-30 16:19:42 +0000 UTC" firstStartedPulling="2026-01-30 16:19:44.816789829 +0000 UTC m=+1433.453852428" lastFinishedPulling="2026-01-30 16:19:46.174639311 +0000 UTC m=+1434.811701920" observedRunningTime="2026-01-30 16:20:02.9731466 +0000 UTC m=+1451.610209199" watchObservedRunningTime="2026-01-30 16:20:02.998251592 +0000 UTC m=+1451.635314191" Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.011835 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bd6c9fd8-9p6nz" event={"ID":"2701590d-93ff-476c-8ad7-fd118b873a3e","Type":"ContainerStarted","Data":"0c11fb0fe12e869ee7d51dfbfe9b0919eacfc9e1b92425405d7b49aa0132350b"} Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.013281 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.030710 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"00c33ca9-470b-4f78-891a-7e95cd279000","Type":"ContainerStarted","Data":"56310f77a9d8b61071bc359c8f5910d9123931a0d47aac44531ca0cc78325022"} Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.030776 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"00c33ca9-470b-4f78-891a-7e95cd279000","Type":"ContainerStarted","Data":"72365e429227090f6fcbde06e702e438b1e8bc0c02e2a7e0df62a9fded872769"} Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.064987 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cloudkitty-api-0"] Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.071675 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" containerID="cri-o://1f6d81e5a78f35608233e14df2c88c5163b48bf8f79d11f80736c5923c93a629" gracePeriod=30 Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.071897 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerStarted","Data":"a63d29a8ebcc08e30800005b156a280a09f7177a709e0995439303d78954c1cf"} Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.071969 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.071996 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.072590 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-bf9587c4-75g67" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api" containerID="cri-o://a63d29a8ebcc08e30800005b156a280a09f7177a709e0995439303d78954c1cf" gracePeriod=30 Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.097662 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7cc6d874d7-q46r7" podStartSLOduration=4.960519096 podStartE2EDuration="28.097638327s" podCreationTimestamp="2026-01-30 16:19:35 +0000 UTC" firstStartedPulling="2026-01-30 16:19:37.357000845 +0000 UTC m=+1425.994063444" lastFinishedPulling="2026-01-30 16:20:00.494120086 +0000 UTC m=+1449.131182675" observedRunningTime="2026-01-30 16:20:03.020993426 +0000 UTC m=+1451.658056025" watchObservedRunningTime="2026-01-30 16:20:03.097638327 +0000 UTC m=+1451.734700926" Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.216916 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.221508 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.184:8080/\": dial tcp 10.217.0.184:8080: connect: connection refused" Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.402863 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2398e05-2c84-4851-922a-3e6a7c9e3994" path="/var/lib/kubelet/pods/e2398e05-2c84-4851-922a-3e6a7c9e3994/volumes" Jan 30 16:20:03 crc kubenswrapper[4740]: I0130 16:20:03.652652 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.186:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.129749 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.129871 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/2.log" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.135719 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/1.log" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.137370 4740 generic.go:334] "Generic (PLEG): container finished" podID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerID="0c11fb0fe12e869ee7d51dfbfe9b0919eacfc9e1b92425405d7b49aa0132350b" exitCode=1 Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.137454 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bd6c9fd8-9p6nz" event={"ID":"2701590d-93ff-476c-8ad7-fd118b873a3e","Type":"ContainerDied","Data":"0c11fb0fe12e869ee7d51dfbfe9b0919eacfc9e1b92425405d7b49aa0132350b"} Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.137501 4740 scope.go:117] "RemoveContainer" containerID="25a01ff01c27594aec37c84d6bf3944ae2e5724d91564e6cb1141540689b0196" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.138721 4740 scope.go:117] "RemoveContainer" containerID="0c11fb0fe12e869ee7d51dfbfe9b0919eacfc9e1b92425405d7b49aa0132350b" Jan 30 16:20:04 crc kubenswrapper[4740]: E0130 16:20:04.139220 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-64bd6c9fd8-9p6nz_openstack(2701590d-93ff-476c-8ad7-fd118b873a3e)\"" pod="openstack/neutron-64bd6c9fd8-9p6nz" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.150710 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"00c33ca9-470b-4f78-891a-7e95cd279000","Type":"ContainerStarted","Data":"53df5f93917c3178d24a1d996c0ffe4db01e882a3c016603c3b91d6c2d54a299"} Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.150945 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="00c33ca9-470b-4f78-891a-7e95cd279000" containerName="cloudkitty-api-log" containerID="cri-o://56310f77a9d8b61071bc359c8f5910d9123931a0d47aac44531ca0cc78325022" gracePeriod=30 Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.151281 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.151319 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="00c33ca9-470b-4f78-891a-7e95cd279000" containerName="cloudkitty-api" containerID="cri-o://53df5f93917c3178d24a1d996c0ffe4db01e882a3c016603c3b91d6c2d54a299" gracePeriod=30 Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.175143 4740 generic.go:334] "Generic (PLEG): container finished" podID="95fa343f-47ce-425f-a254-58264f0a3f6b" containerID="18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340" exitCode=0 Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.175261 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"95fa343f-47ce-425f-a254-58264f0a3f6b","Type":"ContainerDied","Data":"18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340"} Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.175301 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"95fa343f-47ce-425f-a254-58264f0a3f6b","Type":"ContainerDied","Data":"239fda0a97f69a459b895a9765a6fb59174f1d4660ea0e72a19a91b9de60faf6"} Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.175399 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.182467 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-run-httpd\") pod \"95fa343f-47ce-425f-a254-58264f0a3f6b\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.182568 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-config-data\") pod \"95fa343f-47ce-425f-a254-58264f0a3f6b\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.182990 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-combined-ca-bundle\") pod \"95fa343f-47ce-425f-a254-58264f0a3f6b\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.183049 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mkq7\" (UniqueName: \"kubernetes.io/projected/95fa343f-47ce-425f-a254-58264f0a3f6b-kube-api-access-2mkq7\") pod \"95fa343f-47ce-425f-a254-58264f0a3f6b\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.183080 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-sg-core-conf-yaml\") pod \"95fa343f-47ce-425f-a254-58264f0a3f6b\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.183112 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-log-httpd\") pod \"95fa343f-47ce-425f-a254-58264f0a3f6b\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.183182 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-scripts\") pod \"95fa343f-47ce-425f-a254-58264f0a3f6b\" (UID: \"95fa343f-47ce-425f-a254-58264f0a3f6b\") " Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.188724 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "95fa343f-47ce-425f-a254-58264f0a3f6b" (UID: "95fa343f-47ce-425f-a254-58264f0a3f6b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.189946 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "95fa343f-47ce-425f-a254-58264f0a3f6b" (UID: "95fa343f-47ce-425f-a254-58264f0a3f6b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.238611 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-scripts" (OuterVolumeSpecName: "scripts") pod "95fa343f-47ce-425f-a254-58264f0a3f6b" (UID: "95fa343f-47ce-425f-a254-58264f0a3f6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.252446 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=5.252387612 podStartE2EDuration="5.252387612s" podCreationTimestamp="2026-01-30 16:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:04.20472009 +0000 UTC m=+1452.841782699" watchObservedRunningTime="2026-01-30 16:20:04.252387612 +0000 UTC m=+1452.889450211" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.265723 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fa343f-47ce-425f-a254-58264f0a3f6b-kube-api-access-2mkq7" (OuterVolumeSpecName: "kube-api-access-2mkq7") pod "95fa343f-47ce-425f-a254-58264f0a3f6b" (UID: "95fa343f-47ce-425f-a254-58264f0a3f6b"). InnerVolumeSpecName "kube-api-access-2mkq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.280059 4740 generic.go:334] "Generic (PLEG): container finished" podID="09b3c286-aa27-4b55-8b05-50484d643da5" containerID="a63d29a8ebcc08e30800005b156a280a09f7177a709e0995439303d78954c1cf" exitCode=1 Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.280102 4740 generic.go:334] "Generic (PLEG): container finished" podID="09b3c286-aa27-4b55-8b05-50484d643da5" containerID="1f6d81e5a78f35608233e14df2c88c5163b48bf8f79d11f80736c5923c93a629" exitCode=143 Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.280194 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerDied","Data":"a63d29a8ebcc08e30800005b156a280a09f7177a709e0995439303d78954c1cf"} Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.280231 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerDied","Data":"1f6d81e5a78f35608233e14df2c88c5163b48bf8f79d11f80736c5923c93a629"} Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.287385 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mkq7\" (UniqueName: \"kubernetes.io/projected/95fa343f-47ce-425f-a254-58264f0a3f6b-kube-api-access-2mkq7\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.287417 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.287431 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.287444 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95fa343f-47ce-425f-a254-58264f0a3f6b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.306897 4740 scope.go:117] "RemoveContainer" containerID="5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.312589 4740 generic.go:334] "Generic (PLEG): container finished" podID="eca56045-0ade-4faf-b0a2-17a4702c1fd8" containerID="4c3bfd8aefcf7bc397da39e3eee197876d12b6d20b76066b35b9454352f46fa0" exitCode=0 Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.315073 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" event={"ID":"eca56045-0ade-4faf-b0a2-17a4702c1fd8","Type":"ContainerDied","Data":"4c3bfd8aefcf7bc397da39e3eee197876d12b6d20b76066b35b9454352f46fa0"} Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.412452 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95fa343f-47ce-425f-a254-58264f0a3f6b" (UID: "95fa343f-47ce-425f-a254-58264f0a3f6b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.420684 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "95fa343f-47ce-425f-a254-58264f0a3f6b" (UID: "95fa343f-47ce-425f-a254-58264f0a3f6b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.459777 4740 scope.go:117] "RemoveContainer" containerID="18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.463536 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-config-data" (OuterVolumeSpecName: "config-data") pod "95fa343f-47ce-425f-a254-58264f0a3f6b" (UID: "95fa343f-47ce-425f-a254-58264f0a3f6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.505866 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.505904 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.505917 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95fa343f-47ce-425f-a254-58264f0a3f6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.657800 4740 scope.go:117] "RemoveContainer" containerID="5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4" Jan 30 16:20:04 crc kubenswrapper[4740]: E0130 16:20:04.672929 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4\": container with ID starting with 5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4 not found: ID does not exist" containerID="5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.673010 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4"} err="failed to get container status \"5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4\": rpc error: code = NotFound desc = could not find container \"5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4\": container with ID starting with 5022d259792d3e2c98e44fff773b61312b6bbd87ed0ca0c553268f6a89b103a4 not found: ID does not exist" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.673050 4740 scope.go:117] "RemoveContainer" containerID="18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340" Jan 30 16:20:04 crc kubenswrapper[4740]: E0130 16:20:04.694394 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340\": container with ID starting with 18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340 not found: ID does not exist" containerID="18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.694549 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340"} err="failed to get container status \"18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340\": rpc error: code = NotFound desc = could not find container \"18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340\": container with ID starting with 18de1db65336f014c9bf866283528b49c56acd64c6fe0a255c907fb64ad0f340 not found: ID does not exist" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.694605 4740 scope.go:117] "RemoveContainer" containerID="0b799a2c7b13c34a456b41895d90df2c90835375fb77e146374bcea0b3a62dbb" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.732628 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.767434 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.793713 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:04 crc kubenswrapper[4740]: E0130 16:20:04.797020 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fa343f-47ce-425f-a254-58264f0a3f6b" containerName="ceilometer-central-agent" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.797061 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fa343f-47ce-425f-a254-58264f0a3f6b" containerName="ceilometer-central-agent" Jan 30 16:20:04 crc kubenswrapper[4740]: E0130 16:20:04.797116 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2398e05-2c84-4851-922a-3e6a7c9e3994" containerName="dnsmasq-dns" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.797126 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2398e05-2c84-4851-922a-3e6a7c9e3994" containerName="dnsmasq-dns" Jan 30 16:20:04 crc kubenswrapper[4740]: E0130 16:20:04.797181 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fa343f-47ce-425f-a254-58264f0a3f6b" containerName="sg-core" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.797189 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fa343f-47ce-425f-a254-58264f0a3f6b" containerName="sg-core" Jan 30 16:20:04 crc kubenswrapper[4740]: E0130 16:20:04.797241 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2398e05-2c84-4851-922a-3e6a7c9e3994" containerName="init" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.797252 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2398e05-2c84-4851-922a-3e6a7c9e3994" containerName="init" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.797639 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fa343f-47ce-425f-a254-58264f0a3f6b" containerName="ceilometer-central-agent" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.797659 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fa343f-47ce-425f-a254-58264f0a3f6b" containerName="sg-core" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.797683 4740 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e2398e05-2c84-4851-922a-3e6a7c9e3994" containerName="dnsmasq-dns" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.800023 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.812000 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.812576 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.832999 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.869611 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.869728 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.869766 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-scripts\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.869802 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-run-httpd\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.869830 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-log-httpd\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.869920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-config-data\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.869989 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2xj2\" (UniqueName: \"kubernetes.io/projected/5860cf35-81e0-4ee4-b98b-04e1192da186-kube-api-access-v2xj2\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.947576 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.972329 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.995603 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-scripts\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.995719 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-run-httpd\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.995792 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-log-httpd\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.996064 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-config-data\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.996224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2xj2\" (UniqueName: \"kubernetes.io/projected/5860cf35-81e0-4ee4-b98b-04e1192da186-kube-api-access-v2xj2\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:04 crc kubenswrapper[4740]: I0130 16:20:04.996315 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:04.995453 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:04.997524 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-run-httpd\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:04.997860 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-log-httpd\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " 
pod="openstack/ceilometer-0" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.013425 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-scripts\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.022981 4740 scope.go:117] "RemoveContainer" containerID="70b7474a4233a59043fb6e39a447187a286e78aab0acf7c3649d4ff8171f8b61" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.031682 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-config-data\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.039071 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.053694 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2xj2\" (UniqueName: \"kubernetes.io/projected/5860cf35-81e0-4ee4-b98b-04e1192da186-kube-api-access-v2xj2\") pod \"ceilometer-0\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " pod="openstack/ceilometer-0" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.097796 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b3c286-aa27-4b55-8b05-50484d643da5-logs\") pod \"09b3c286-aa27-4b55-8b05-50484d643da5\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.097924 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-combined-ca-bundle\") pod \"09b3c286-aa27-4b55-8b05-50484d643da5\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.098036 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data-custom\") pod \"09b3c286-aa27-4b55-8b05-50484d643da5\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.098118 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data\") pod \"09b3c286-aa27-4b55-8b05-50484d643da5\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.098173 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrwpz\" (UniqueName: \"kubernetes.io/projected/09b3c286-aa27-4b55-8b05-50484d643da5-kube-api-access-hrwpz\") pod \"09b3c286-aa27-4b55-8b05-50484d643da5\" (UID: \"09b3c286-aa27-4b55-8b05-50484d643da5\") " Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.103486 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/09b3c286-aa27-4b55-8b05-50484d643da5-logs" (OuterVolumeSpecName: "logs") pod "09b3c286-aa27-4b55-8b05-50484d643da5" (UID: "09b3c286-aa27-4b55-8b05-50484d643da5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.118215 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b3c286-aa27-4b55-8b05-50484d643da5-kube-api-access-hrwpz" (OuterVolumeSpecName: "kube-api-access-hrwpz") pod "09b3c286-aa27-4b55-8b05-50484d643da5" (UID: "09b3c286-aa27-4b55-8b05-50484d643da5"). InnerVolumeSpecName "kube-api-access-hrwpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.122459 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "09b3c286-aa27-4b55-8b05-50484d643da5" (UID: "09b3c286-aa27-4b55-8b05-50484d643da5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.204215 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b3c286-aa27-4b55-8b05-50484d643da5-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.204694 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.204709 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrwpz\" (UniqueName: \"kubernetes.io/projected/09b3c286-aa27-4b55-8b05-50484d643da5-kube-api-access-hrwpz\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.219687 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09b3c286-aa27-4b55-8b05-50484d643da5" (UID: "09b3c286-aa27-4b55-8b05-50484d643da5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.266064 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data" (OuterVolumeSpecName: "config-data") pod "09b3c286-aa27-4b55-8b05-50484d643da5" (UID: "09b3c286-aa27-4b55-8b05-50484d643da5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.274336 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.306811 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.306852 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b3c286-aa27-4b55-8b05-50484d643da5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.349246 4740 generic.go:334] "Generic (PLEG): container finished" podID="00c33ca9-470b-4f78-891a-7e95cd279000" containerID="56310f77a9d8b61071bc359c8f5910d9123931a0d47aac44531ca0cc78325022" exitCode=143 Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.377899 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fa343f-47ce-425f-a254-58264f0a3f6b" path="/var/lib/kubelet/pods/95fa343f-47ce-425f-a254-58264f0a3f6b/volumes" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.378501 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bf9587c4-75g67" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.378957 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"00c33ca9-470b-4f78-891a-7e95cd279000","Type":"ContainerDied","Data":"56310f77a9d8b61071bc359c8f5910d9123931a0d47aac44531ca0cc78325022"} Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.378996 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9587c4-75g67" event={"ID":"09b3c286-aa27-4b55-8b05-50484d643da5","Type":"ContainerDied","Data":"8e482167eecd98609a08b7a62027a3802dd60646b8b8b8a119d08eb6c8ad8ed7"} Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.379031 4740 scope.go:117] "RemoveContainer" containerID="a63d29a8ebcc08e30800005b156a280a09f7177a709e0995439303d78954c1cf" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.398062 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" event={"ID":"eca56045-0ade-4faf-b0a2-17a4702c1fd8","Type":"ContainerStarted","Data":"7c0f9a5e1fc77cf1e86ea6785c5d2a1f2d86c1e961b577acd5d492309921f8ca"} Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.398158 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.423505 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/2.log" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.429632 4740 scope.go:117] "RemoveContainer" containerID="0c11fb0fe12e869ee7d51dfbfe9b0919eacfc9e1b92425405d7b49aa0132350b" Jan 30 16:20:05 crc kubenswrapper[4740]: E0130 16:20:05.429918 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-64bd6c9fd8-9p6nz_openstack(2701590d-93ff-476c-8ad7-fd118b873a3e)\"" pod="openstack/neutron-64bd6c9fd8-9p6nz" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.452021 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-bf9587c4-75g67"] Jan 
30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.458933 4740 scope.go:117] "RemoveContainer" containerID="1f6d81e5a78f35608233e14df2c88c5163b48bf8f79d11f80736c5923c93a629" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.465189 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-bf9587c4-75g67"] Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.500088 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" podStartSLOduration=6.500069081 podStartE2EDuration="6.500069081s" podCreationTimestamp="2026-01-30 16:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:05.497054266 +0000 UTC m=+1454.134116885" watchObservedRunningTime="2026-01-30 16:20:05.500069081 +0000 UTC m=+1454.137131680" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.786755 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-p77nv" podUID="e2398e05-2c84-4851-922a-3e6a7c9e3994" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Jan 30 16:20:05 crc kubenswrapper[4740]: I0130 16:20:05.901334 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:06 crc kubenswrapper[4740]: I0130 16:20:06.464743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5860cf35-81e0-4ee4-b98b-04e1192da186","Type":"ContainerStarted","Data":"fcf6ecfbf685cbeb60f1d67ed331064e6a1aabf5bce1200272df6fb7cf238c37"} Jan 30 16:20:06 crc kubenswrapper[4740]: I0130 16:20:06.967005 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-59f5786cfd-w4tqb" Jan 30 16:20:07 crc kubenswrapper[4740]: I0130 16:20:07.361730 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" path="/var/lib/kubelet/pods/09b3c286-aa27-4b55-8b05-50484d643da5/volumes" Jan 30 16:20:07 crc kubenswrapper[4740]: I0130 16:20:07.444387 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:20:07 crc kubenswrapper[4740]: I0130 16:20:07.445673 4740 scope.go:117] "RemoveContainer" containerID="0c11fb0fe12e869ee7d51dfbfe9b0919eacfc9e1b92425405d7b49aa0132350b" Jan 30 16:20:07 crc kubenswrapper[4740]: E0130 16:20:07.445926 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-64bd6c9fd8-9p6nz_openstack(2701590d-93ff-476c-8ad7-fd118b873a3e)\"" pod="openstack/neutron-64bd6c9fd8-9p6nz" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" Jan 30 16:20:07 crc kubenswrapper[4740]: I0130 16:20:07.450524 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-64bd6c9fd8-9p6nz" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-api" probeResult="failure" output="Get \"http://10.217.0.179:9696/\": dial tcp 10.217.0.179:9696: connect: connection refused" Jan 30 16:20:07 crc kubenswrapper[4740]: I0130 16:20:07.543799 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"eba7c81b-ae84-4672-9108-001326602860","Type":"ContainerStarted","Data":"b459342f6fe5c138a85d106c96e75341b667576e67e335987444cab2b0653d88"} Jan 30 16:20:07 
crc kubenswrapper[4740]: I0130 16:20:07.577188 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=3.901578125 podStartE2EDuration="8.577160018s" podCreationTimestamp="2026-01-30 16:19:59 +0000 UTC" firstStartedPulling="2026-01-30 16:20:02.222865954 +0000 UTC m=+1450.859928563" lastFinishedPulling="2026-01-30 16:20:06.898447857 +0000 UTC m=+1455.535510456" observedRunningTime="2026-01-30 16:20:07.575707282 +0000 UTC m=+1456.212769891" watchObservedRunningTime="2026-01-30 16:20:07.577160018 +0000 UTC m=+1456.214222617" Jan 30 16:20:07 crc kubenswrapper[4740]: I0130 16:20:07.606999 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:20:08 crc kubenswrapper[4740]: I0130 16:20:08.558909 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5860cf35-81e0-4ee4-b98b-04e1192da186","Type":"ContainerStarted","Data":"4d5729d09c8fa83e4a2c93012becd422234588b01b9d93a2b471f14e9fe83e2f"} Jan 30 16:20:08 crc kubenswrapper[4740]: I0130 16:20:08.694695 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.186:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.084763 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.186119 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.571681 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" containerName="cinder-scheduler" containerID="cri-o://ccdc04bd64a8a05abc7f0e79b1f0ea2c126cec566db039bc6fcd517d51850bb4" gracePeriod=30 Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.572674 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" containerName="probe" containerID="cri-o://a8ed13bfd769619a43889a60e55c7b8ebba6ba0d9f67356d707a259e919fef5f" gracePeriod=30 Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.573000 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="eba7c81b-ae84-4672-9108-001326602860" containerName="cloudkitty-proc" containerID="cri-o://b459342f6fe5c138a85d106c96e75341b667576e67e335987444cab2b0653d88" gracePeriod=30 Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.876987 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 16:20:09 crc kubenswrapper[4740]: E0130 16:20:09.877686 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.877715 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api" Jan 30 16:20:09 crc kubenswrapper[4740]: E0130 16:20:09.877748 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api" Jan 30 
16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.877756 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api" Jan 30 16:20:09 crc kubenswrapper[4740]: E0130 16:20:09.877765 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.877773 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" Jan 30 16:20:09 crc kubenswrapper[4740]: E0130 16:20:09.877795 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.877801 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.878422 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.878451 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.878464 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.878476 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.879651 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.884424 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.884642 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.889040 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-5fwd5" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.920652 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.962802 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.962899 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.963057 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwrk\" (UniqueName: \"kubernetes.io/projected/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-kube-api-access-bcwrk\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:09 crc kubenswrapper[4740]: I0130 16:20:09.963090 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-openstack-config\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.065706 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcwrk\" (UniqueName: \"kubernetes.io/projected/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-kube-api-access-bcwrk\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.065782 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-openstack-config\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.065931 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.065985 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.067171 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-openstack-config\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.110407 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcwrk\" (UniqueName: \"kubernetes.io/projected/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-kube-api-access-bcwrk\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.112339 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.112500 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0\") " pod="openstack/openstackclient" Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.210188 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient"
Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.903947 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-586b4b4677-4tdp8" podUID="4876d8e9-6662-4958-bb1a-091307ccfd02" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.904701 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-586b4b4677-4tdp8" podUID="4876d8e9-6662-4958-bb1a-091307ccfd02" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 30 16:20:10 crc kubenswrapper[4740]: I0130 16:20:10.921608 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-586b4b4677-4tdp8" podUID="4876d8e9-6662-4958-bb1a-091307ccfd02" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 30 16:20:11 crc kubenswrapper[4740]: I0130 16:20:11.262393 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7745b764-mmpkw"
Jan 30 16:20:11 crc kubenswrapper[4740]: I0130 16:20:11.267495 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 30 16:20:11 crc kubenswrapper[4740]: I0130 16:20:11.395336 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7745b764-mmpkw"
Jan 30 16:20:11 crc kubenswrapper[4740]: I0130 16:20:11.498260 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jz9qh"
Jan 30 16:20:11 crc kubenswrapper[4740]: I0130 16:20:11.502300 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jz9qh"
Jan 30 16:20:11 crc kubenswrapper[4740]: I0130 16:20:11.636608 4740 generic.go:334] "Generic (PLEG): container finished" podID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" containerID="a8ed13bfd769619a43889a60e55c7b8ebba6ba0d9f67356d707a259e919fef5f" exitCode=0
Jan 30 16:20:11 crc kubenswrapper[4740]: I0130 16:20:11.636686 4740 generic.go:334] "Generic (PLEG): container finished" podID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" containerID="ccdc04bd64a8a05abc7f0e79b1f0ea2c126cec566db039bc6fcd517d51850bb4" exitCode=0
Jan 30 16:20:11 crc kubenswrapper[4740]: I0130 16:20:11.636760 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3c2f8900-2e7a-48ec-8966-f7f8d211c251","Type":"ContainerDied","Data":"a8ed13bfd769619a43889a60e55c7b8ebba6ba0d9f67356d707a259e919fef5f"}
Jan 30 16:20:11 crc kubenswrapper[4740]: I0130 16:20:11.636806 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3c2f8900-2e7a-48ec-8966-f7f8d211c251","Type":"ContainerDied","Data":"ccdc04bd64a8a05abc7f0e79b1f0ea2c126cec566db039bc6fcd517d51850bb4"}
Jan 30 16:20:11 crc kubenswrapper[4740]: I0130 16:20:11.649651 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0","Type":"ContainerStarted","Data":"38d030c64020f1c4d576dac05ce6bffdc703a0a187a82b71b698dc33efee05aa"}
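The grace-period kills at 16:20:09 above resolve in the PLEG events just logged: both cinder-scheduler containers exited with code 0 roughly two seconds after SIGTERM, well inside the gracePeriod=30 budget, so no SIGKILL was needed. A sketch (assumed helper names, not kubelet code) that correlates the two entry types by container ID:

import re
from datetime import datetime

# Pair each "Killing container with a grace period" entry with the PLEG
# "container finished" entry for the same container ID, and report how much
# of the grace period the container actually used.
KILL = re.compile(r'I(\d{4} [\d:.]+) .*"Killing container with a grace period".*containerID="cri-o://([0-9a-f]+)"')
DIED = re.compile(r'I(\d{4} [\d:.]+) .*"Generic \(PLEG\): container finished".*containerID="([0-9a-f]+)"')

def ts(klog_ts: str) -> datetime:
    # klog stamps look like "0130 16:20:09.571681": month+day plus time, no year.
    return datetime.strptime("2026 " + klog_ts, "%Y %m%d %H:%M:%S.%f")

def grace_usage(lines):
    killed = {}
    for line in lines:
        if m := KILL.search(line):
            killed[m.group(2)] = ts(m.group(1))
        elif (m := DIED.search(line)) and m.group(2) in killed:
            yield m.group(2)[:12], (ts(m.group(1)) - killed.pop(m.group(2))).total_seconds()

# For the two cinder-scheduler containers this yields ~2.06s each, far below
# the 30s budget.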
event={"ID":"5860cf35-81e0-4ee4-b98b-04e1192da186","Type":"ContainerStarted","Data":"3da0b8620ee547c4b88e733bcef757361a8c92ccab969e3f65a1cd59881afb65"} Jan 30 16:20:11 crc kubenswrapper[4740]: I0130 16:20:11.897841 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.038031 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c2f8900-2e7a-48ec-8966-f7f8d211c251-etc-machine-id\") pod \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.038202 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-scripts\") pod \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.038134 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2f8900-2e7a-48ec-8966-f7f8d211c251-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3c2f8900-2e7a-48ec-8966-f7f8d211c251" (UID: "3c2f8900-2e7a-48ec-8966-f7f8d211c251"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.038432 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data-custom\") pod \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.039747 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-combined-ca-bundle\") pod \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.039830 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n82qm\" (UniqueName: \"kubernetes.io/projected/3c2f8900-2e7a-48ec-8966-f7f8d211c251-kube-api-access-n82qm\") pod \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.039905 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data\") pod \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\" (UID: \"3c2f8900-2e7a-48ec-8966-f7f8d211c251\") " Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.040999 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c2f8900-2e7a-48ec-8966-f7f8d211c251-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.051684 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3c2f8900-2e7a-48ec-8966-f7f8d211c251" (UID: "3c2f8900-2e7a-48ec-8966-f7f8d211c251"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.053581 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-scripts" (OuterVolumeSpecName: "scripts") pod "3c2f8900-2e7a-48ec-8966-f7f8d211c251" (UID: "3c2f8900-2e7a-48ec-8966-f7f8d211c251"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.054737 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2f8900-2e7a-48ec-8966-f7f8d211c251-kube-api-access-n82qm" (OuterVolumeSpecName: "kube-api-access-n82qm") pod "3c2f8900-2e7a-48ec-8966-f7f8d211c251" (UID: "3c2f8900-2e7a-48ec-8966-f7f8d211c251"). InnerVolumeSpecName "kube-api-access-n82qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.143602 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n82qm\" (UniqueName: \"kubernetes.io/projected/3c2f8900-2e7a-48ec-8966-f7f8d211c251-kube-api-access-n82qm\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.143645 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.143655 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.204591 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c2f8900-2e7a-48ec-8966-f7f8d211c251" (UID: "3c2f8900-2e7a-48ec-8966-f7f8d211c251"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.247180 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.324610 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data" (OuterVolumeSpecName: "config-data") pod "3c2f8900-2e7a-48ec-8966-f7f8d211c251" (UID: "3c2f8900-2e7a-48ec-8966-f7f8d211c251"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.351534 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2f8900-2e7a-48ec-8966-f7f8d211c251-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.625994 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" probeResult="failure" output=< Jan 30 16:20:12 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:20:12 crc kubenswrapper[4740]: > Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.700310 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3c2f8900-2e7a-48ec-8966-f7f8d211c251","Type":"ContainerDied","Data":"e182b2024cb3b615a4ab7ad32fe6dfcc31fe9666b5c0ff951ccdffa55466e2bc"} Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.700392 4740 scope.go:117] "RemoveContainer" containerID="a8ed13bfd769619a43889a60e55c7b8ebba6ba0d9f67356d707a259e919fef5f" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.700593 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.767576 4740 scope.go:117] "RemoveContainer" containerID="ccdc04bd64a8a05abc7f0e79b1f0ea2c126cec566db039bc6fcd517d51850bb4" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.795977 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.807592 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.840136 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:20:12 crc kubenswrapper[4740]: E0130 16:20:12.840707 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" containerName="probe" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.840727 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" containerName="probe" Jan 30 16:20:12 crc kubenswrapper[4740]: E0130 16:20:12.840750 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" containerName="cinder-scheduler" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.840757 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" containerName="cinder-scheduler" Jan 30 16:20:12 crc kubenswrapper[4740]: E0130 16:20:12.840784 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.840791 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.840984 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b3c286-aa27-4b55-8b05-50484d643da5" containerName="barbican-api-log" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.841003 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" containerName="cinder-scheduler" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.841015 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" containerName="probe" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.842223 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.846799 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.859296 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.967101 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-config-data\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.967239 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.967334 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97251097-8f48-4938-ba55-ca2ad0e01a6f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.967384 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms692\" (UniqueName: \"kubernetes.io/projected/97251097-8f48-4938-ba55-ca2ad0e01a6f-kube-api-access-ms692\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.967426 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:12 crc kubenswrapper[4740]: I0130 16:20:12.967630 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-scripts\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.069888 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-config-data\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.069980 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.070059 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97251097-8f48-4938-ba55-ca2ad0e01a6f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.070084 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms692\" (UniqueName: \"kubernetes.io/projected/97251097-8f48-4938-ba55-ca2ad0e01a6f-kube-api-access-ms692\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.070123 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.070165 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-scripts\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.071218 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/97251097-8f48-4938-ba55-ca2ad0e01a6f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.077696 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-config-data\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.078080 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.087816 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-scripts\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.091988 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97251097-8f48-4938-ba55-ca2ad0e01a6f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 
16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.108724 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms692\" (UniqueName: \"kubernetes.io/projected/97251097-8f48-4938-ba55-ca2ad0e01a6f-kube-api-access-ms692\") pod \"cinder-scheduler-0\" (UID: \"97251097-8f48-4938-ba55-ca2ad0e01a6f\") " pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.222297 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.356050 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c2f8900-2e7a-48ec-8966-f7f8d211c251" path="/var/lib/kubelet/pods/3c2f8900-2e7a-48ec-8966-f7f8d211c251/volumes" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.731441 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5860cf35-81e0-4ee4-b98b-04e1192da186","Type":"ContainerStarted","Data":"81774b21b1dff17af2fead7fb0cd54a7abec932c60bdbef24333236609addb78"} Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.740164 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.186:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:20:13 crc kubenswrapper[4740]: I0130 16:20:13.908333 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 16:20:14 crc kubenswrapper[4740]: I0130 16:20:14.567575 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:20:14 crc kubenswrapper[4740]: I0130 16:20:14.678195 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-lh2fh"] Jan 30 16:20:14 crc kubenswrapper[4740]: I0130 16:20:14.678563 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" podUID="996fb134-f1a9-45ba-bdec-62c17b1fa428" containerName="dnsmasq-dns" containerID="cri-o://cdc41b1425dec44f3c04cd49dcadecacfb349cf9f836dcf649e34dc43f7ec2c0" gracePeriod=10 Jan 30 16:20:14 crc kubenswrapper[4740]: I0130 16:20:14.781785 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"97251097-8f48-4938-ba55-ca2ad0e01a6f","Type":"ContainerStarted","Data":"4c2180c8fbd5275238d72c0a726df7051a160d3bf871078737f0c7b56f8eee5a"} Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.035821 4740 generic.go:334] "Generic (PLEG): container finished" podID="996fb134-f1a9-45ba-bdec-62c17b1fa428" containerID="cdc41b1425dec44f3c04cd49dcadecacfb349cf9f836dcf649e34dc43f7ec2c0" exitCode=0 Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.036927 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" event={"ID":"996fb134-f1a9-45ba-bdec-62c17b1fa428","Type":"ContainerDied","Data":"cdc41b1425dec44f3c04cd49dcadecacfb349cf9f836dcf649e34dc43f7ec2c0"} Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.063854 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"97251097-8f48-4938-ba55-ca2ad0e01a6f","Type":"ContainerStarted","Data":"2555a4f90f4b0cf3251d58453cceb3ed27e8fd087795cff1a3d924cbf32dc490"} Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 
16:20:16.196446 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.304610 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-config\") pod \"996fb134-f1a9-45ba-bdec-62c17b1fa428\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.304704 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-sb\") pod \"996fb134-f1a9-45ba-bdec-62c17b1fa428\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.304825 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-swift-storage-0\") pod \"996fb134-f1a9-45ba-bdec-62c17b1fa428\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.304969 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-nb\") pod \"996fb134-f1a9-45ba-bdec-62c17b1fa428\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.305009 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-svc\") pod \"996fb134-f1a9-45ba-bdec-62c17b1fa428\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.305259 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6lgh\" (UniqueName: \"kubernetes.io/projected/996fb134-f1a9-45ba-bdec-62c17b1fa428-kube-api-access-t6lgh\") pod \"996fb134-f1a9-45ba-bdec-62c17b1fa428\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.321700 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996fb134-f1a9-45ba-bdec-62c17b1fa428-kube-api-access-t6lgh" (OuterVolumeSpecName: "kube-api-access-t6lgh") pod "996fb134-f1a9-45ba-bdec-62c17b1fa428" (UID: "996fb134-f1a9-45ba-bdec-62c17b1fa428"). InnerVolumeSpecName "kube-api-access-t6lgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.409253 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6lgh\" (UniqueName: \"kubernetes.io/projected/996fb134-f1a9-45ba-bdec-62c17b1fa428-kube-api-access-t6lgh\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.475113 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-config" (OuterVolumeSpecName: "config") pod "996fb134-f1a9-45ba-bdec-62c17b1fa428" (UID: "996fb134-f1a9-45ba-bdec-62c17b1fa428"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.475196 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "996fb134-f1a9-45ba-bdec-62c17b1fa428" (UID: "996fb134-f1a9-45ba-bdec-62c17b1fa428"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.479073 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "996fb134-f1a9-45ba-bdec-62c17b1fa428" (UID: "996fb134-f1a9-45ba-bdec-62c17b1fa428"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:16 crc kubenswrapper[4740]: E0130 16:20:16.514889 4740 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-swift-storage-0 podName:996fb134-f1a9-45ba-bdec-62c17b1fa428 nodeName:}" failed. No retries permitted until 2026-01-30 16:20:17.014849639 +0000 UTC m=+1465.651912238 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-swift-storage-0" (UniqueName: "kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-swift-storage-0") pod "996fb134-f1a9-45ba-bdec-62c17b1fa428" (UID: "996fb134-f1a9-45ba-bdec-62c17b1fa428") : error deleting /var/lib/kubelet/pods/996fb134-f1a9-45ba-bdec-62c17b1fa428/volume-subpaths: remove /var/lib/kubelet/pods/996fb134-f1a9-45ba-bdec-62c17b1fa428/volume-subpaths: no such file or directory Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.515218 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "996fb134-f1a9-45ba-bdec-62c17b1fa428" (UID: "996fb134-f1a9-45ba-bdec-62c17b1fa428"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.515808 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.515850 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.515860 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:16 crc kubenswrapper[4740]: I0130 16:20:16.515869 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:17 crc kubenswrapper[4740]: I0130 16:20:17.054345 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-swift-storage-0\") pod \"996fb134-f1a9-45ba-bdec-62c17b1fa428\" (UID: \"996fb134-f1a9-45ba-bdec-62c17b1fa428\") " Jan 30 16:20:17 crc kubenswrapper[4740]: I0130 16:20:17.055026 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "996fb134-f1a9-45ba-bdec-62c17b1fa428" (UID: "996fb134-f1a9-45ba-bdec-62c17b1fa428"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:17 crc kubenswrapper[4740]: I0130 16:20:17.056408 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/996fb134-f1a9-45ba-bdec-62c17b1fa428-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:17 crc kubenswrapper[4740]: I0130 16:20:17.094820 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" event={"ID":"996fb134-f1a9-45ba-bdec-62c17b1fa428","Type":"ContainerDied","Data":"8aa5b34b638c5e92ed114a274d74432becf35f1059d5c252a0a11a979641ece2"} Jan 30 16:20:17 crc kubenswrapper[4740]: I0130 16:20:17.094912 4740 scope.go:117] "RemoveContainer" containerID="cdc41b1425dec44f3c04cd49dcadecacfb349cf9f836dcf649e34dc43f7ec2c0" Jan 30 16:20:17 crc kubenswrapper[4740]: I0130 16:20:17.095239 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-lh2fh" Jan 30 16:20:17 crc kubenswrapper[4740]: I0130 16:20:17.203320 4740 scope.go:117] "RemoveContainer" containerID="3326a04848e0479fc6c3a30bc06e71364c5157358fe3c86edd956ec0228a4a1f" Jan 30 16:20:17 crc kubenswrapper[4740]: I0130 16:20:17.238526 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-lh2fh"] Jan 30 16:20:17 crc kubenswrapper[4740]: I0130 16:20:17.247707 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-lh2fh"] Jan 30 16:20:17 crc kubenswrapper[4740]: I0130 16:20:17.359185 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996fb134-f1a9-45ba-bdec-62c17b1fa428" path="/var/lib/kubelet/pods/996fb134-f1a9-45ba-bdec-62c17b1fa428/volumes" Jan 30 16:20:18 crc kubenswrapper[4740]: I0130 16:20:18.113880 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5860cf35-81e0-4ee4-b98b-04e1192da186","Type":"ContainerStarted","Data":"e7d6b55df297896035a0bf31ab59a4b778959fba560f0523131dc08ddad34564"} Jan 30 16:20:18 crc kubenswrapper[4740]: I0130 16:20:18.114508 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:20:18 crc kubenswrapper[4740]: I0130 16:20:18.127597 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"97251097-8f48-4938-ba55-ca2ad0e01a6f","Type":"ContainerStarted","Data":"31cac44e649f28c06eb73165c5b1602571db4574156f68665e9960d4f9a3ea49"} Jan 30 16:20:18 crc kubenswrapper[4740]: I0130 16:20:18.144020 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.190851758 podStartE2EDuration="14.143990738s" podCreationTimestamp="2026-01-30 16:20:04 +0000 UTC" firstStartedPulling="2026-01-30 16:20:05.930623317 +0000 UTC m=+1454.567685926" lastFinishedPulling="2026-01-30 16:20:16.883762307 +0000 UTC m=+1465.520824906" observedRunningTime="2026-01-30 16:20:18.141883935 +0000 UTC m=+1466.778946544" watchObservedRunningTime="2026-01-30 16:20:18.143990738 +0000 UTC m=+1466.781053337" Jan 30 16:20:18 crc kubenswrapper[4740]: I0130 16:20:18.177371 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.177332214 podStartE2EDuration="6.177332214s" podCreationTimestamp="2026-01-30 16:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:18.167840919 +0000 UTC m=+1466.804903548" watchObservedRunningTime="2026-01-30 16:20:18.177332214 +0000 UTC m=+1466.814394813" Jan 30 16:20:18 crc kubenswrapper[4740]: I0130 16:20:18.223156 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 16:20:18 crc kubenswrapper[4740]: I0130 16:20:18.337410 4740 scope.go:117] "RemoveContainer" containerID="0c11fb0fe12e869ee7d51dfbfe9b0919eacfc9e1b92425405d7b49aa0132350b" Jan 30 16:20:18 crc kubenswrapper[4740]: E0130 16:20:18.338272 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-64bd6c9fd8-9p6nz_openstack(2701590d-93ff-476c-8ad7-fd118b873a3e)\"" pod="openstack/neutron-64bd6c9fd8-9p6nz" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" Jan 30 16:20:18 
crc kubenswrapper[4740]: I0130 16:20:18.782927 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.186:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:20:18 crc kubenswrapper[4740]: I0130 16:20:18.798424 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.253427 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-77b9c5655-hbm7j"] Jan 30 16:20:20 crc kubenswrapper[4740]: E0130 16:20:20.254724 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996fb134-f1a9-45ba-bdec-62c17b1fa428" containerName="dnsmasq-dns" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.254746 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="996fb134-f1a9-45ba-bdec-62c17b1fa428" containerName="dnsmasq-dns" Jan 30 16:20:20 crc kubenswrapper[4740]: E0130 16:20:20.254804 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996fb134-f1a9-45ba-bdec-62c17b1fa428" containerName="init" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.254812 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="996fb134-f1a9-45ba-bdec-62c17b1fa428" containerName="init" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.255092 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="996fb134-f1a9-45ba-bdec-62c17b1fa428" containerName="dnsmasq-dns" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.257701 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.262485 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.265540 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.265926 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.294291 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77b9c5655-hbm7j"] Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.355758 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-internal-tls-certs\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.359054 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkj2d\" (UniqueName: \"kubernetes.io/projected/476176f1-b9ac-4d2d-90ea-7abfcea252c4-kube-api-access-mkj2d\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.359131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-public-tls-certs\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.359210 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/476176f1-b9ac-4d2d-90ea-7abfcea252c4-etc-swift\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.359270 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-config-data\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.359331 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/476176f1-b9ac-4d2d-90ea-7abfcea252c4-log-httpd\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.359458 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/476176f1-b9ac-4d2d-90ea-7abfcea252c4-run-httpd\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.359574 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-combined-ca-bundle\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.466616 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/476176f1-b9ac-4d2d-90ea-7abfcea252c4-etc-swift\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.466733 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-config-data\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.466808 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/476176f1-b9ac-4d2d-90ea-7abfcea252c4-log-httpd\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.466833 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/476176f1-b9ac-4d2d-90ea-7abfcea252c4-run-httpd\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.467005 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-combined-ca-bundle\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.467086 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-internal-tls-certs\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.467191 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkj2d\" (UniqueName: \"kubernetes.io/projected/476176f1-b9ac-4d2d-90ea-7abfcea252c4-kube-api-access-mkj2d\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.467404 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-public-tls-certs\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.470803 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/476176f1-b9ac-4d2d-90ea-7abfcea252c4-log-httpd\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.470976 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/476176f1-b9ac-4d2d-90ea-7abfcea252c4-run-httpd\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.479394 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-combined-ca-bundle\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.480068 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-internal-tls-certs\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.481109 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/476176f1-b9ac-4d2d-90ea-7abfcea252c4-etc-swift\") pod 
\"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.486631 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-public-tls-certs\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.492190 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkj2d\" (UniqueName: \"kubernetes.io/projected/476176f1-b9ac-4d2d-90ea-7abfcea252c4-kube-api-access-mkj2d\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.501755 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/476176f1-b9ac-4d2d-90ea-7abfcea252c4-config-data\") pod \"swift-proxy-77b9c5655-hbm7j\" (UID: \"476176f1-b9ac-4d2d-90ea-7abfcea252c4\") " pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:20 crc kubenswrapper[4740]: I0130 16:20:20.626257 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:21 crc kubenswrapper[4740]: I0130 16:20:21.522604 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77b9c5655-hbm7j"] Jan 30 16:20:21 crc kubenswrapper[4740]: I0130 16:20:21.964030 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.035949 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd6a6d5-372b-412e-b528-c1329736b727-logs\") pod \"1fd6a6d5-372b-412e-b528-c1329736b727\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.037256 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fd6a6d5-372b-412e-b528-c1329736b727-logs" (OuterVolumeSpecName: "logs") pod "1fd6a6d5-372b-412e-b528-c1329736b727" (UID: "1fd6a6d5-372b-412e-b528-c1329736b727"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.037393 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-scripts\") pod \"1fd6a6d5-372b-412e-b528-c1329736b727\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.037427 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fd6a6d5-372b-412e-b528-c1329736b727-etc-machine-id\") pod \"1fd6a6d5-372b-412e-b528-c1329736b727\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.037471 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data\") pod \"1fd6a6d5-372b-412e-b528-c1329736b727\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.037529 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nwcw\" (UniqueName: \"kubernetes.io/projected/1fd6a6d5-372b-412e-b528-c1329736b727-kube-api-access-4nwcw\") pod \"1fd6a6d5-372b-412e-b528-c1329736b727\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.037620 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-combined-ca-bundle\") pod \"1fd6a6d5-372b-412e-b528-c1329736b727\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.037701 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data-custom\") pod \"1fd6a6d5-372b-412e-b528-c1329736b727\" (UID: \"1fd6a6d5-372b-412e-b528-c1329736b727\") " Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.038502 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fd6a6d5-372b-412e-b528-c1329736b727-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.038972 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fd6a6d5-372b-412e-b528-c1329736b727-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1fd6a6d5-372b-412e-b528-c1329736b727" (UID: "1fd6a6d5-372b-412e-b528-c1329736b727"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.045210 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1fd6a6d5-372b-412e-b528-c1329736b727" (UID: "1fd6a6d5-372b-412e-b528-c1329736b727"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.047603 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd6a6d5-372b-412e-b528-c1329736b727-kube-api-access-4nwcw" (OuterVolumeSpecName: "kube-api-access-4nwcw") pod "1fd6a6d5-372b-412e-b528-c1329736b727" (UID: "1fd6a6d5-372b-412e-b528-c1329736b727"). InnerVolumeSpecName "kube-api-access-4nwcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.048078 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-scripts" (OuterVolumeSpecName: "scripts") pod "1fd6a6d5-372b-412e-b528-c1329736b727" (UID: "1fd6a6d5-372b-412e-b528-c1329736b727"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.095561 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fd6a6d5-372b-412e-b528-c1329736b727" (UID: "1fd6a6d5-372b-412e-b528-c1329736b727"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.135458 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data" (OuterVolumeSpecName: "config-data") pod "1fd6a6d5-372b-412e-b528-c1329736b727" (UID: "1fd6a6d5-372b-412e-b528-c1329736b727"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.141521 4740 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fd6a6d5-372b-412e-b528-c1329736b727-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.141563 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.141583 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.141597 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nwcw\" (UniqueName: \"kubernetes.io/projected/1fd6a6d5-372b-412e-b528-c1329736b727-kube-api-access-4nwcw\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.141612 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.141625 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fd6a6d5-372b-412e-b528-c1329736b727-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.212792 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-77b9c5655-hbm7j" event={"ID":"476176f1-b9ac-4d2d-90ea-7abfcea252c4","Type":"ContainerStarted","Data":"928e0c0a8da8830f77dc72456af304cea33332175686f6f309e98d8b63a2d2b5"} Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.212942 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77b9c5655-hbm7j" event={"ID":"476176f1-b9ac-4d2d-90ea-7abfcea252c4","Type":"ContainerStarted","Data":"35fb81d0624e3d1f2fefd774fb5e27cb49840ffcc3ebc70bc1e96d11f55779a6"} Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.219190 4740 generic.go:334] "Generic (PLEG): container finished" podID="1fd6a6d5-372b-412e-b528-c1329736b727" containerID="2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309" exitCode=137 Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.219252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fd6a6d5-372b-412e-b528-c1329736b727","Type":"ContainerDied","Data":"2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309"} Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.219291 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fd6a6d5-372b-412e-b528-c1329736b727","Type":"ContainerDied","Data":"8ece6ccc339a2a3bc018458fda225e7396bbcdf30edd344756c7afb07938807f"} Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.219323 4740 scope.go:117] "RemoveContainer" containerID="2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.219559 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.274735 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.306839 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.320113 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:20:22 crc kubenswrapper[4740]: E0130 16:20:22.320876 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.320896 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api" Jan 30 16:20:22 crc kubenswrapper[4740]: E0130 16:20:22.320925 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api-log" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.320933 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api-log" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.321150 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api-log" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.321170 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" containerName="cinder-api" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.322713 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.325880 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.326264 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.330366 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.364458 4740 scope.go:117] "RemoveContainer" containerID="3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.364758 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.411561 4740 scope.go:117] "RemoveContainer" containerID="2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309" Jan 30 16:20:22 crc kubenswrapper[4740]: E0130 16:20:22.412142 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309\": container with ID starting with 2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309 not found: ID does not exist" containerID="2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.412181 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309"} err="failed to get container status \"2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309\": rpc error: code = NotFound desc = could not find container \"2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309\": container with ID starting with 2d09c9f87e72b1627703e5223dbaf1de05dfd9f9162a2876aa3d953935841309 not found: ID does not exist" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.412210 4740 scope.go:117] "RemoveContainer" containerID="3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77" Jan 30 16:20:22 crc kubenswrapper[4740]: E0130 16:20:22.412689 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77\": container with ID starting with 3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77 not found: ID does not exist" containerID="3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.412716 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77"} err="failed to get container status \"3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77\": rpc error: code = NotFound desc = could not find container \"3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77\": container with ID starting with 3e9c0e4f3d87965a2657ff15db1cda90293b14d6ea2dfcb7bf7307452383bd77 not found: ID does not exist" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.463502 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/22570e91-9697-47f0-81d5-c38551f883b2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.463602 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xbsh\" (UniqueName: \"kubernetes.io/projected/22570e91-9697-47f0-81d5-c38551f883b2-kube-api-access-5xbsh\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.463648 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22570e91-9697-47f0-81d5-c38551f883b2-logs\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.463731 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-config-data-custom\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.463772 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.463824 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.463916 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.463973 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-scripts\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.465549 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-config-data\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.568426 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xbsh\" (UniqueName: \"kubernetes.io/projected/22570e91-9697-47f0-81d5-c38551f883b2-kube-api-access-5xbsh\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 
16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.570080 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22570e91-9697-47f0-81d5-c38551f883b2-logs\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.570298 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-config-data-custom\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.570505 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.570673 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.570836 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.570951 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-scripts\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.571218 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-config-data\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.572511 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/22570e91-9697-47f0-81d5-c38551f883b2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.572673 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22570e91-9697-47f0-81d5-c38551f883b2-logs\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.572823 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/22570e91-9697-47f0-81d5-c38551f883b2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 
16:20:22.581030 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.582121 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-config-data\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.582944 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-config-data-custom\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.589099 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.589727 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.593986 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22570e91-9697-47f0-81d5-c38551f883b2-scripts\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.594703 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xbsh\" (UniqueName: \"kubernetes.io/projected/22570e91-9697-47f0-81d5-c38551f883b2-kube-api-access-5xbsh\") pod \"cinder-api-0\" (UID: \"22570e91-9697-47f0-81d5-c38551f883b2\") " pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.613713 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" probeResult="failure" output=< Jan 30 16:20:22 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:20:22 crc kubenswrapper[4740]: > Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.653873 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.752381 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.752838 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="ceilometer-central-agent" containerID="cri-o://4d5729d09c8fa83e4a2c93012becd422234588b01b9d93a2b471f14e9fe83e2f" gracePeriod=30 Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.753203 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="proxy-httpd" containerID="cri-o://e7d6b55df297896035a0bf31ab59a4b778959fba560f0523131dc08ddad34564" gracePeriod=30 Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.753804 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="ceilometer-notification-agent" containerID="cri-o://3da0b8620ee547c4b88e733bcef757361a8c92ccab969e3f65a1cd59881afb65" gracePeriod=30 Jan 30 16:20:22 crc kubenswrapper[4740]: I0130 16:20:22.753875 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="sg-core" containerID="cri-o://81774b21b1dff17af2fead7fb0cd54a7abec932c60bdbef24333236609addb78" gracePeriod=30 Jan 30 16:20:23 crc kubenswrapper[4740]: I0130 16:20:23.236878 4740 generic.go:334] "Generic (PLEG): container finished" podID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerID="e7d6b55df297896035a0bf31ab59a4b778959fba560f0523131dc08ddad34564" exitCode=0 Jan 30 16:20:23 crc kubenswrapper[4740]: I0130 16:20:23.237303 4740 generic.go:334] "Generic (PLEG): container finished" podID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerID="81774b21b1dff17af2fead7fb0cd54a7abec932c60bdbef24333236609addb78" exitCode=2 Jan 30 16:20:23 crc kubenswrapper[4740]: I0130 16:20:23.236951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5860cf35-81e0-4ee4-b98b-04e1192da186","Type":"ContainerDied","Data":"e7d6b55df297896035a0bf31ab59a4b778959fba560f0523131dc08ddad34564"} Jan 30 16:20:23 crc kubenswrapper[4740]: I0130 16:20:23.237409 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5860cf35-81e0-4ee4-b98b-04e1192da186","Type":"ContainerDied","Data":"81774b21b1dff17af2fead7fb0cd54a7abec932c60bdbef24333236609addb78"} Jan 30 16:20:23 crc kubenswrapper[4740]: I0130 16:20:23.240236 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77b9c5655-hbm7j" event={"ID":"476176f1-b9ac-4d2d-90ea-7abfcea252c4","Type":"ContainerStarted","Data":"acc15ba9c08e9f8eff157d45f2a22a7ac966b300f35bebf0b1196cbced2dc724"} Jan 30 16:20:23 crc kubenswrapper[4740]: I0130 16:20:23.240472 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:23 crc kubenswrapper[4740]: I0130 16:20:23.286417 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-77b9c5655-hbm7j" podStartSLOduration=3.286385936 podStartE2EDuration="3.286385936s" podCreationTimestamp="2026-01-30 16:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:23.276122782 +0000 UTC m=+1471.913185381" watchObservedRunningTime="2026-01-30 16:20:23.286385936 +0000 UTC m=+1471.923448535" Jan 30 16:20:23 crc kubenswrapper[4740]: I0130 16:20:23.328245 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 16:20:23 crc kubenswrapper[4740]: I0130 16:20:23.381327 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd6a6d5-372b-412e-b528-c1329736b727" path="/var/lib/kubelet/pods/1fd6a6d5-372b-412e-b528-c1329736b727/volumes" Jan 30 16:20:23 crc kubenswrapper[4740]: I0130 16:20:23.921461 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 16:20:24 crc kubenswrapper[4740]: I0130 16:20:24.265838 4740 generic.go:334] "Generic (PLEG): container finished" podID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerID="3da0b8620ee547c4b88e733bcef757361a8c92ccab969e3f65a1cd59881afb65" exitCode=0 Jan 30 16:20:24 crc kubenswrapper[4740]: I0130 16:20:24.266252 4740 generic.go:334] "Generic (PLEG): container finished" podID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerID="4d5729d09c8fa83e4a2c93012becd422234588b01b9d93a2b471f14e9fe83e2f" exitCode=0 Jan 30 16:20:24 crc kubenswrapper[4740]: I0130 16:20:24.265931 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5860cf35-81e0-4ee4-b98b-04e1192da186","Type":"ContainerDied","Data":"3da0b8620ee547c4b88e733bcef757361a8c92ccab969e3f65a1cd59881afb65"} Jan 30 16:20:24 crc kubenswrapper[4740]: I0130 16:20:24.266544 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:24 crc kubenswrapper[4740]: I0130 16:20:24.266617 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5860cf35-81e0-4ee4-b98b-04e1192da186","Type":"ContainerDied","Data":"4d5729d09c8fa83e4a2c93012becd422234588b01b9d93a2b471f14e9fe83e2f"} Jan 30 16:20:29 crc kubenswrapper[4740]: W0130 16:20:29.840759 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22570e91_9697_47f0_81d5_c38551f883b2.slice/crio-0790423e97eab6cfea41e8da7cd04aadd736fc6d052565ad79f9244f9a5609d5 WatchSource:0}: Error finding container 0790423e97eab6cfea41e8da7cd04aadd736fc6d052565ad79f9244f9a5609d5: Status 404 returned error can't find the container with id 0790423e97eab6cfea41e8da7cd04aadd736fc6d052565ad79f9244f9a5609d5 Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.335768 4740 scope.go:117] "RemoveContainer" containerID="0c11fb0fe12e869ee7d51dfbfe9b0919eacfc9e1b92425405d7b49aa0132350b" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.374582 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"22570e91-9697-47f0-81d5-c38551f883b2","Type":"ContainerStarted","Data":"0790423e97eab6cfea41e8da7cd04aadd736fc6d052565ad79f9244f9a5609d5"} Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.423194 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.512327 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-sg-core-conf-yaml\") pod \"5860cf35-81e0-4ee4-b98b-04e1192da186\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.512443 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-scripts\") pod \"5860cf35-81e0-4ee4-b98b-04e1192da186\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.512575 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-config-data\") pod \"5860cf35-81e0-4ee4-b98b-04e1192da186\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.512665 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-combined-ca-bundle\") pod \"5860cf35-81e0-4ee4-b98b-04e1192da186\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.512785 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-run-httpd\") pod \"5860cf35-81e0-4ee4-b98b-04e1192da186\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.512866 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-log-httpd\") pod \"5860cf35-81e0-4ee4-b98b-04e1192da186\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.512945 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2xj2\" (UniqueName: \"kubernetes.io/projected/5860cf35-81e0-4ee4-b98b-04e1192da186-kube-api-access-v2xj2\") pod \"5860cf35-81e0-4ee4-b98b-04e1192da186\" (UID: \"5860cf35-81e0-4ee4-b98b-04e1192da186\") " Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.513610 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5860cf35-81e0-4ee4-b98b-04e1192da186" (UID: "5860cf35-81e0-4ee4-b98b-04e1192da186"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.514235 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5860cf35-81e0-4ee4-b98b-04e1192da186" (UID: "5860cf35-81e0-4ee4-b98b-04e1192da186"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.515876 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.515907 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5860cf35-81e0-4ee4-b98b-04e1192da186-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.522598 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5860cf35-81e0-4ee4-b98b-04e1192da186-kube-api-access-v2xj2" (OuterVolumeSpecName: "kube-api-access-v2xj2") pod "5860cf35-81e0-4ee4-b98b-04e1192da186" (UID: "5860cf35-81e0-4ee4-b98b-04e1192da186"). InnerVolumeSpecName "kube-api-access-v2xj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.530013 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-scripts" (OuterVolumeSpecName: "scripts") pod "5860cf35-81e0-4ee4-b98b-04e1192da186" (UID: "5860cf35-81e0-4ee4-b98b-04e1192da186"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.572962 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5860cf35-81e0-4ee4-b98b-04e1192da186" (UID: "5860cf35-81e0-4ee4-b98b-04e1192da186"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.618095 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2xj2\" (UniqueName: \"kubernetes.io/projected/5860cf35-81e0-4ee4-b98b-04e1192da186-kube-api-access-v2xj2\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.618135 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.618148 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.647245 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.647325 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77b9c5655-hbm7j" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.663246 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5860cf35-81e0-4ee4-b98b-04e1192da186" (UID: "5860cf35-81e0-4ee4-b98b-04e1192da186"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.686223 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-config-data" (OuterVolumeSpecName: "config-data") pod "5860cf35-81e0-4ee4-b98b-04e1192da186" (UID: "5860cf35-81e0-4ee4-b98b-04e1192da186"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.722870 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:30 crc kubenswrapper[4740]: I0130 16:20:30.722910 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5860cf35-81e0-4ee4-b98b-04e1192da186-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.395949 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5860cf35-81e0-4ee4-b98b-04e1192da186","Type":"ContainerDied","Data":"fcf6ecfbf685cbeb60f1d67ed331064e6a1aabf5bce1200272df6fb7cf238c37"} Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.396726 4740 scope.go:117] "RemoveContainer" containerID="e7d6b55df297896035a0bf31ab59a4b778959fba560f0523131dc08ddad34564" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.397809 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.409191 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/3.log" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.410698 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/2.log" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.411135 4740 generic.go:334] "Generic (PLEG): container finished" podID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerID="f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c" exitCode=1 Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.411296 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bd6c9fd8-9p6nz" event={"ID":"2701590d-93ff-476c-8ad7-fd118b873a3e","Type":"ContainerDied","Data":"f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c"} Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.412440 4740 scope.go:117] "RemoveContainer" containerID="f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c" Jan 30 16:20:31 crc kubenswrapper[4740]: E0130 16:20:31.412786 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=neutron-httpd pod=neutron-64bd6c9fd8-9p6nz_openstack(2701590d-93ff-476c-8ad7-fd118b873a3e)\"" pod="openstack/neutron-64bd6c9fd8-9p6nz" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.431479 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"22570e91-9697-47f0-81d5-c38551f883b2","Type":"ContainerStarted","Data":"a51fbe4ce907190aa9e890388131f396e28b7bd6943de82f4c980a6a5c966380"} Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.446706 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0","Type":"ContainerStarted","Data":"c864b4948c8f30b0c07b7b3a5215476abe20e45c20299899e4d5a126be4d0c3f"} Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.467549 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.490479 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.491632 4740 scope.go:117] "RemoveContainer" containerID="81774b21b1dff17af2fead7fb0cd54a7abec932c60bdbef24333236609addb78" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.509966 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:31 crc kubenswrapper[4740]: E0130 16:20:31.511031 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="proxy-httpd" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.511071 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="proxy-httpd" Jan 30 16:20:31 crc kubenswrapper[4740]: E0130 16:20:31.511104 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="ceilometer-central-agent" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.511113 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="ceilometer-central-agent" Jan 30 16:20:31 crc kubenswrapper[4740]: E0130 16:20:31.511135 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="sg-core" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.511144 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="sg-core" Jan 30 16:20:31 crc kubenswrapper[4740]: E0130 16:20:31.511159 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="ceilometer-notification-agent" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.511167 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="ceilometer-notification-agent" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.511627 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="sg-core" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.511663 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="ceilometer-central-agent" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.511674 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="ceilometer-notification-agent" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.511691 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" containerName="proxy-httpd" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 
16:20:31.514962 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.517681 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.517849 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.519208 4740 scope.go:117] "RemoveContainer" containerID="3da0b8620ee547c4b88e733bcef757361a8c92ccab969e3f65a1cd59881afb65" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.557431 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.561034 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.902280396 podStartE2EDuration="22.560999125s" podCreationTimestamp="2026-01-30 16:20:09 +0000 UTC" firstStartedPulling="2026-01-30 16:20:11.332624903 +0000 UTC m=+1459.969687502" lastFinishedPulling="2026-01-30 16:20:29.991343632 +0000 UTC m=+1478.628406231" observedRunningTime="2026-01-30 16:20:31.483903533 +0000 UTC m=+1480.120966142" watchObservedRunningTime="2026-01-30 16:20:31.560999125 +0000 UTC m=+1480.198061724" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.567665 4740 scope.go:117] "RemoveContainer" containerID="4d5729d09c8fa83e4a2c93012becd422234588b01b9d93a2b471f14e9fe83e2f" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.602276 4740 scope.go:117] "RemoveContainer" containerID="0c11fb0fe12e869ee7d51dfbfe9b0919eacfc9e1b92425405d7b49aa0132350b" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.662786 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.662867 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.662920 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-log-httpd\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.662947 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlt96\" (UniqueName: \"kubernetes.io/projected/232652ff-1455-468e-a812-d7699d96ffbe-kube-api-access-jlt96\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.662966 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-config-data\") pod \"ceilometer-0\" 
(UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.663006 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-run-httpd\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.663063 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-scripts\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.765916 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.766036 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.766142 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-log-httpd\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.766169 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlt96\" (UniqueName: \"kubernetes.io/projected/232652ff-1455-468e-a812-d7699d96ffbe-kube-api-access-jlt96\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.766196 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-config-data\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.766256 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-run-httpd\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.766456 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-scripts\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.767103 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-log-httpd\") pod \"ceilometer-0\" (UID: 
\"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.771936 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-run-httpd\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.774779 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.774832 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-scripts\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.775468 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-config-data\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.777664 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.794694 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlt96\" (UniqueName: \"kubernetes.io/projected/232652ff-1455-468e-a812-d7699d96ffbe-kube-api-access-jlt96\") pod \"ceilometer-0\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " pod="openstack/ceilometer-0" Jan 30 16:20:31 crc kubenswrapper[4740]: I0130 16:20:31.836871 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:32 crc kubenswrapper[4740]: I0130 16:20:32.428386 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:32 crc kubenswrapper[4740]: I0130 16:20:32.464032 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"232652ff-1455-468e-a812-d7699d96ffbe","Type":"ContainerStarted","Data":"1d9670502bfcbc3c1ada400dff03d026ef337540b754009bca5156530a3ca6c0"} Jan 30 16:20:32 crc kubenswrapper[4740]: I0130 16:20:32.469039 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/3.log" Jan 30 16:20:32 crc kubenswrapper[4740]: I0130 16:20:32.472772 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"22570e91-9697-47f0-81d5-c38551f883b2","Type":"ContainerStarted","Data":"ff5814d02547a19e229e9b58043f336a6f3d6a00cfbd87bb36c2dba5b7d28566"} Jan 30 16:20:32 crc kubenswrapper[4740]: I0130 16:20:32.499907 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.499884447 podStartE2EDuration="10.499884447s" podCreationTimestamp="2026-01-30 16:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:32.499755854 +0000 UTC m=+1481.136818473" watchObservedRunningTime="2026-01-30 16:20:32.499884447 +0000 UTC m=+1481.136947036" Jan 30 16:20:32 crc kubenswrapper[4740]: I0130 16:20:32.585870 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" probeResult="failure" output=< Jan 30 16:20:32 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:20:32 crc kubenswrapper[4740]: > Jan 30 16:20:32 crc kubenswrapper[4740]: I0130 16:20:32.654419 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 16:20:33 crc kubenswrapper[4740]: I0130 16:20:33.355181 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5860cf35-81e0-4ee4-b98b-04e1192da186" path="/var/lib/kubelet/pods/5860cf35-81e0-4ee4-b98b-04e1192da186/volumes" Jan 30 16:20:33 crc kubenswrapper[4740]: I0130 16:20:33.815919 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.513203 4740 generic.go:334] "Generic (PLEG): container finished" podID="00c33ca9-470b-4f78-891a-7e95cd279000" containerID="53df5f93917c3178d24a1d996c0ffe4db01e882a3c016603c3b91d6c2d54a299" exitCode=137 Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.513265 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"00c33ca9-470b-4f78-891a-7e95cd279000","Type":"ContainerDied","Data":"53df5f93917c3178d24a1d996c0ffe4db01e882a3c016603c3b91d6c2d54a299"} Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.517451 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"232652ff-1455-468e-a812-d7699d96ffbe","Type":"ContainerStarted","Data":"c28b2a422df3802eb1cac5fbd264e20af55270cd79c15b78269818a8cdd2f16a"} Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.827403 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.875528 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk5tv\" (UniqueName: \"kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-kube-api-access-hk5tv\") pod \"00c33ca9-470b-4f78-891a-7e95cd279000\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.875599 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data\") pod \"00c33ca9-470b-4f78-891a-7e95cd279000\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.875619 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-combined-ca-bundle\") pod \"00c33ca9-470b-4f78-891a-7e95cd279000\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.875647 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c33ca9-470b-4f78-891a-7e95cd279000-logs\") pod \"00c33ca9-470b-4f78-891a-7e95cd279000\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.875804 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data-custom\") pod \"00c33ca9-470b-4f78-891a-7e95cd279000\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.875834 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-scripts\") pod \"00c33ca9-470b-4f78-891a-7e95cd279000\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.875884 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-certs\") pod \"00c33ca9-470b-4f78-891a-7e95cd279000\" (UID: \"00c33ca9-470b-4f78-891a-7e95cd279000\") " Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.881674 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c33ca9-470b-4f78-891a-7e95cd279000-logs" (OuterVolumeSpecName: "logs") pod "00c33ca9-470b-4f78-891a-7e95cd279000" (UID: "00c33ca9-470b-4f78-891a-7e95cd279000"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.889118 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-certs" (OuterVolumeSpecName: "certs") pod "00c33ca9-470b-4f78-891a-7e95cd279000" (UID: "00c33ca9-470b-4f78-891a-7e95cd279000"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.896624 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "00c33ca9-470b-4f78-891a-7e95cd279000" (UID: "00c33ca9-470b-4f78-891a-7e95cd279000"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.908650 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-kube-api-access-hk5tv" (OuterVolumeSpecName: "kube-api-access-hk5tv") pod "00c33ca9-470b-4f78-891a-7e95cd279000" (UID: "00c33ca9-470b-4f78-891a-7e95cd279000"). InnerVolumeSpecName "kube-api-access-hk5tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.919654 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-scripts" (OuterVolumeSpecName: "scripts") pod "00c33ca9-470b-4f78-891a-7e95cd279000" (UID: "00c33ca9-470b-4f78-891a-7e95cd279000"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.921529 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data" (OuterVolumeSpecName: "config-data") pod "00c33ca9-470b-4f78-891a-7e95cd279000" (UID: "00c33ca9-470b-4f78-891a-7e95cd279000"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.939674 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00c33ca9-470b-4f78-891a-7e95cd279000" (UID: "00c33ca9-470b-4f78-891a-7e95cd279000"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.978587 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.978642 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.978656 4740 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.978665 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk5tv\" (UniqueName: \"kubernetes.io/projected/00c33ca9-470b-4f78-891a-7e95cd279000-kube-api-access-hk5tv\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.978676 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.978686 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c33ca9-470b-4f78-891a-7e95cd279000-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:34 crc kubenswrapper[4740]: I0130 16:20:34.978696 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c33ca9-470b-4f78-891a-7e95cd279000-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.528900 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"00c33ca9-470b-4f78-891a-7e95cd279000","Type":"ContainerDied","Data":"72365e429227090f6fcbde06e702e438b1e8bc0c02e2a7e0df62a9fded872769"} Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.529337 4740 scope.go:117] "RemoveContainer" containerID="53df5f93917c3178d24a1d996c0ffe4db01e882a3c016603c3b91d6c2d54a299" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.530621 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.531608 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"232652ff-1455-468e-a812-d7699d96ffbe","Type":"ContainerStarted","Data":"0163a3a330b4305964abd68af8f255615cee14b9162f7673cbc0ce731b026e11"} Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.556446 4740 scope.go:117] "RemoveContainer" containerID="56310f77a9d8b61071bc359c8f5910d9123931a0d47aac44531ca0cc78325022" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.580060 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.593652 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.619166 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:20:35 crc kubenswrapper[4740]: E0130 16:20:35.619955 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c33ca9-470b-4f78-891a-7e95cd279000" containerName="cloudkitty-api" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.619981 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c33ca9-470b-4f78-891a-7e95cd279000" containerName="cloudkitty-api" Jan 30 16:20:35 crc kubenswrapper[4740]: E0130 16:20:35.620016 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c33ca9-470b-4f78-891a-7e95cd279000" containerName="cloudkitty-api-log" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.620129 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c33ca9-470b-4f78-891a-7e95cd279000" containerName="cloudkitty-api-log" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.620427 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c33ca9-470b-4f78-891a-7e95cd279000" containerName="cloudkitty-api-log" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.620449 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c33ca9-470b-4f78-891a-7e95cd279000" containerName="cloudkitty-api" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.622224 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.630132 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.630657 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.630864 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.671177 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.694819 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.694974 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-certs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.695045 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.695108 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.695159 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l8j9\" (UniqueName: \"kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-kube-api-access-5l8j9\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.695191 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.695245 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-scripts\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.695274 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.695331 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-logs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.797700 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.797906 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-certs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.797993 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.798048 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.798088 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l8j9\" (UniqueName: \"kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-kube-api-access-5l8j9\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.798117 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.798156 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-scripts\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.798178 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 
16:20:35.798224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-logs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.798874 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-logs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.803907 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.804936 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.805081 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-certs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.805358 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.805756 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.810261 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.819053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-scripts\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.835411 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l8j9\" (UniqueName: \"kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-kube-api-access-5l8j9\") pod \"cloudkitty-api-0\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " pod="openstack/cloudkitty-api-0" Jan 30 16:20:35 crc kubenswrapper[4740]: I0130 16:20:35.949686 4740 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 30 16:20:36 crc kubenswrapper[4740]: I0130 16:20:36.558106 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"232652ff-1455-468e-a812-d7699d96ffbe","Type":"ContainerStarted","Data":"9c72d3d9dd7438c7ccdd06cde51c6603d8c35dec72448b09a16e877af64b9d1a"} Jan 30 16:20:36 crc kubenswrapper[4740]: I0130 16:20:36.599838 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:20:37 crc kubenswrapper[4740]: I0130 16:20:37.355554 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c33ca9-470b-4f78-891a-7e95cd279000" path="/var/lib/kubelet/pods/00c33ca9-470b-4f78-891a-7e95cd279000/volumes" Jan 30 16:20:37 crc kubenswrapper[4740]: I0130 16:20:37.444780 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:20:37 crc kubenswrapper[4740]: I0130 16:20:37.445321 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:20:37 crc kubenswrapper[4740]: I0130 16:20:37.445902 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-64bd6c9fd8-9p6nz" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-api" probeResult="failure" output="Get \"http://10.217.0.179:9696/\": dial tcp 10.217.0.179:9696: connect: connection refused" Jan 30 16:20:37 crc kubenswrapper[4740]: I0130 16:20:37.446764 4740 scope.go:117] "RemoveContainer" containerID="f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c" Jan 30 16:20:37 crc kubenswrapper[4740]: E0130 16:20:37.447091 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=neutron-httpd pod=neutron-64bd6c9fd8-9p6nz_openstack(2701590d-93ff-476c-8ad7-fd118b873a3e)\"" pod="openstack/neutron-64bd6c9fd8-9p6nz" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" Jan 30 16:20:37 crc kubenswrapper[4740]: I0130 16:20:37.574900 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf","Type":"ContainerStarted","Data":"463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63"} Jan 30 16:20:37 crc kubenswrapper[4740]: I0130 16:20:37.574970 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf","Type":"ContainerStarted","Data":"e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136"} Jan 30 16:20:37 crc kubenswrapper[4740]: I0130 16:20:37.574990 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf","Type":"ContainerStarted","Data":"d41a2c727b04522a2b06a8e8b1e40bd4505f18a37c610317435484894ca33709"} Jan 30 16:20:37 crc kubenswrapper[4740]: I0130 16:20:37.575148 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Jan 30 16:20:37 crc kubenswrapper[4740]: I0130 16:20:37.610920 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.610885177 podStartE2EDuration="2.610885177s" podCreationTimestamp="2026-01-30 16:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:37.60414969 +0000 UTC m=+1486.241212299" watchObservedRunningTime="2026-01-30 16:20:37.610885177 +0000 UTC m=+1486.247947796" Jan 30 16:20:39 crc kubenswrapper[4740]: I0130 16:20:39.602455 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"232652ff-1455-468e-a812-d7699d96ffbe","Type":"ContainerStarted","Data":"07f4eb455fcfd795fec2f06da3b80fb8d4a39f33c897b9886ff1fac1374356df"} Jan 30 16:20:39 crc kubenswrapper[4740]: I0130 16:20:39.604719 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="ceilometer-central-agent" containerID="cri-o://c28b2a422df3802eb1cac5fbd264e20af55270cd79c15b78269818a8cdd2f16a" gracePeriod=30 Jan 30 16:20:39 crc kubenswrapper[4740]: I0130 16:20:39.604761 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:20:39 crc kubenswrapper[4740]: I0130 16:20:39.604933 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="proxy-httpd" containerID="cri-o://07f4eb455fcfd795fec2f06da3b80fb8d4a39f33c897b9886ff1fac1374356df" gracePeriod=30 Jan 30 16:20:39 crc kubenswrapper[4740]: I0130 16:20:39.604993 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="sg-core" containerID="cri-o://9c72d3d9dd7438c7ccdd06cde51c6603d8c35dec72448b09a16e877af64b9d1a" gracePeriod=30 Jan 30 16:20:39 crc kubenswrapper[4740]: I0130 16:20:39.605041 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="ceilometer-notification-agent" containerID="cri-o://0163a3a330b4305964abd68af8f255615cee14b9162f7673cbc0ce731b026e11" gracePeriod=30 Jan 30 16:20:39 crc kubenswrapper[4740]: I0130 16:20:39.645303 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.736093252 podStartE2EDuration="8.645276344s" podCreationTimestamp="2026-01-30 16:20:31 +0000 UTC" firstStartedPulling="2026-01-30 16:20:32.429722197 +0000 UTC m=+1481.066784796" lastFinishedPulling="2026-01-30 16:20:38.338905289 +0000 UTC m=+1486.975967888" observedRunningTime="2026-01-30 16:20:39.638037604 +0000 UTC m=+1488.275100203" watchObservedRunningTime="2026-01-30 16:20:39.645276344 +0000 UTC m=+1488.282338943" Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.472958 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.622961 4740 generic.go:334] "Generic (PLEG): container finished" podID="232652ff-1455-468e-a812-d7699d96ffbe" containerID="07f4eb455fcfd795fec2f06da3b80fb8d4a39f33c897b9886ff1fac1374356df" exitCode=0 Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.623022 4740 generic.go:334] "Generic (PLEG): container finished" podID="232652ff-1455-468e-a812-d7699d96ffbe" containerID="9c72d3d9dd7438c7ccdd06cde51c6603d8c35dec72448b09a16e877af64b9d1a" exitCode=2 Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.623043 4740 generic.go:334] "Generic (PLEG): container finished" podID="232652ff-1455-468e-a812-d7699d96ffbe" 
containerID="0163a3a330b4305964abd68af8f255615cee14b9162f7673cbc0ce731b026e11" exitCode=0 Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.623194 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"232652ff-1455-468e-a812-d7699d96ffbe","Type":"ContainerDied","Data":"07f4eb455fcfd795fec2f06da3b80fb8d4a39f33c897b9886ff1fac1374356df"} Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.623242 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"232652ff-1455-468e-a812-d7699d96ffbe","Type":"ContainerDied","Data":"9c72d3d9dd7438c7ccdd06cde51c6603d8c35dec72448b09a16e877af64b9d1a"} Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.623258 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"232652ff-1455-468e-a812-d7699d96ffbe","Type":"ContainerDied","Data":"0163a3a330b4305964abd68af8f255615cee14b9162f7673cbc0ce731b026e11"} Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.630081 4740 generic.go:334] "Generic (PLEG): container finished" podID="eba7c81b-ae84-4672-9108-001326602860" containerID="b459342f6fe5c138a85d106c96e75341b667576e67e335987444cab2b0653d88" exitCode=137 Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.630151 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"eba7c81b-ae84-4672-9108-001326602860","Type":"ContainerDied","Data":"b459342f6fe5c138a85d106c96e75341b667576e67e335987444cab2b0653d88"} Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.854800 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-586b4b4677-4tdp8" Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.956792 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64bd6c9fd8-9p6nz"] Jan 30 16:20:40 crc kubenswrapper[4740]: I0130 16:20:40.961040 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64bd6c9fd8-9p6nz" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-api" containerID="cri-o://7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45" gracePeriod=30 Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.221275 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.275482 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-combined-ca-bundle\") pod \"eba7c81b-ae84-4672-9108-001326602860\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.275597 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data-custom\") pod \"eba7c81b-ae84-4672-9108-001326602860\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.275638 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data\") pod \"eba7c81b-ae84-4672-9108-001326602860\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.276008 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-certs\") pod \"eba7c81b-ae84-4672-9108-001326602860\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.276080 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-scripts\") pod \"eba7c81b-ae84-4672-9108-001326602860\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.276160 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trfsj\" (UniqueName: \"kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-kube-api-access-trfsj\") pod \"eba7c81b-ae84-4672-9108-001326602860\" (UID: \"eba7c81b-ae84-4672-9108-001326602860\") " Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.288548 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-scripts" (OuterVolumeSpecName: "scripts") pod "eba7c81b-ae84-4672-9108-001326602860" (UID: "eba7c81b-ae84-4672-9108-001326602860"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.296302 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eba7c81b-ae84-4672-9108-001326602860" (UID: "eba7c81b-ae84-4672-9108-001326602860"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.304302 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-certs" (OuterVolumeSpecName: "certs") pod "eba7c81b-ae84-4672-9108-001326602860" (UID: "eba7c81b-ae84-4672-9108-001326602860"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.321157 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-kube-api-access-trfsj" (OuterVolumeSpecName: "kube-api-access-trfsj") pod "eba7c81b-ae84-4672-9108-001326602860" (UID: "eba7c81b-ae84-4672-9108-001326602860"). InnerVolumeSpecName "kube-api-access-trfsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.326790 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data" (OuterVolumeSpecName: "config-data") pod "eba7c81b-ae84-4672-9108-001326602860" (UID: "eba7c81b-ae84-4672-9108-001326602860"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.332942 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eba7c81b-ae84-4672-9108-001326602860" (UID: "eba7c81b-ae84-4672-9108-001326602860"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.380182 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trfsj\" (UniqueName: \"kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-kube-api-access-trfsj\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.380649 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.380661 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.380697 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.380706 4740 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/eba7c81b-ae84-4672-9108-001326602860-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.380716 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eba7c81b-ae84-4672-9108-001326602860-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.660000 4740 generic.go:334] "Generic (PLEG): container finished" podID="232652ff-1455-468e-a812-d7699d96ffbe" containerID="c28b2a422df3802eb1cac5fbd264e20af55270cd79c15b78269818a8cdd2f16a" exitCode=0 Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.660089 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"232652ff-1455-468e-a812-d7699d96ffbe","Type":"ContainerDied","Data":"c28b2a422df3802eb1cac5fbd264e20af55270cd79c15b78269818a8cdd2f16a"} Jan 30 16:20:41 crc 
kubenswrapper[4740]: I0130 16:20:41.661727 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"eba7c81b-ae84-4672-9108-001326602860","Type":"ContainerDied","Data":"309f023053989adce89ffe697f8628d7921ef6287849b24ec77f24c94ceb5650"} Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.661770 4740 scope.go:117] "RemoveContainer" containerID="b459342f6fe5c138a85d106c96e75341b667576e67e335987444cab2b0653d88" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.661917 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.701719 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.714556 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.729440 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:20:41 crc kubenswrapper[4740]: E0130 16:20:41.730218 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eba7c81b-ae84-4672-9108-001326602860" containerName="cloudkitty-proc" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.730248 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba7c81b-ae84-4672-9108-001326602860" containerName="cloudkitty-proc" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.730554 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="eba7c81b-ae84-4672-9108-001326602860" containerName="cloudkitty-proc" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.731785 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.739588 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.744425 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.795197 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w4pq\" (UniqueName: \"kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-kube-api-access-4w4pq\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.795310 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.795502 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.795553 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.795595 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-scripts\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.795931 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-certs\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.898599 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w4pq\" (UniqueName: \"kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-kube-api-access-4w4pq\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.899077 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.899123 4740 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.899150 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.899174 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-scripts\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.899301 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-certs\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.905779 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.907052 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.907221 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.907776 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-scripts\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.912479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-certs\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:41 crc kubenswrapper[4740]: I0130 16:20:41.917804 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w4pq\" (UniqueName: \"kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-kube-api-access-4w4pq\") pod \"cloudkitty-proc-0\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.000191 4740 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.082191 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.103794 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlt96\" (UniqueName: \"kubernetes.io/projected/232652ff-1455-468e-a812-d7699d96ffbe-kube-api-access-jlt96\") pod \"232652ff-1455-468e-a812-d7699d96ffbe\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.103839 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-sg-core-conf-yaml\") pod \"232652ff-1455-468e-a812-d7699d96ffbe\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.103888 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-config-data\") pod \"232652ff-1455-468e-a812-d7699d96ffbe\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.104080 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-log-httpd\") pod \"232652ff-1455-468e-a812-d7699d96ffbe\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.104170 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-scripts\") pod \"232652ff-1455-468e-a812-d7699d96ffbe\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.104253 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-combined-ca-bundle\") pod \"232652ff-1455-468e-a812-d7699d96ffbe\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.104316 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-run-httpd\") pod \"232652ff-1455-468e-a812-d7699d96ffbe\" (UID: \"232652ff-1455-468e-a812-d7699d96ffbe\") " Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.105466 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "232652ff-1455-468e-a812-d7699d96ffbe" (UID: "232652ff-1455-468e-a812-d7699d96ffbe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.106929 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "232652ff-1455-468e-a812-d7699d96ffbe" (UID: "232652ff-1455-468e-a812-d7699d96ffbe"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.114957 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232652ff-1455-468e-a812-d7699d96ffbe-kube-api-access-jlt96" (OuterVolumeSpecName: "kube-api-access-jlt96") pod "232652ff-1455-468e-a812-d7699d96ffbe" (UID: "232652ff-1455-468e-a812-d7699d96ffbe"). InnerVolumeSpecName "kube-api-access-jlt96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.115006 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-scripts" (OuterVolumeSpecName: "scripts") pod "232652ff-1455-468e-a812-d7699d96ffbe" (UID: "232652ff-1455-468e-a812-d7699d96ffbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.166134 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "232652ff-1455-468e-a812-d7699d96ffbe" (UID: "232652ff-1455-468e-a812-d7699d96ffbe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.207367 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.207672 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.207683 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/232652ff-1455-468e-a812-d7699d96ffbe-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.207693 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlt96\" (UniqueName: \"kubernetes.io/projected/232652ff-1455-468e-a812-d7699d96ffbe-kube-api-access-jlt96\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.207705 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.218272 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "232652ff-1455-468e-a812-d7699d96ffbe" (UID: "232652ff-1455-468e-a812-d7699d96ffbe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.310214 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.326443 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-config-data" (OuterVolumeSpecName: "config-data") pod "232652ff-1455-468e-a812-d7699d96ffbe" (UID: "232652ff-1455-468e-a812-d7699d96ffbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.413790 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/232652ff-1455-468e-a812-d7699d96ffbe-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.568904 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" probeResult="failure" output=< Jan 30 16:20:42 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:20:42 crc kubenswrapper[4740]: > Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.573760 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.574174 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cbe1d10c-c40d-4b5a-bc95-10495060deb7" containerName="glance-log" containerID="cri-o://d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999" gracePeriod=30 Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.574538 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cbe1d10c-c40d-4b5a-bc95-10495060deb7" containerName="glance-httpd" containerID="cri-o://a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c" gracePeriod=30 Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.702518 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.702670 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"232652ff-1455-468e-a812-d7699d96ffbe","Type":"ContainerDied","Data":"1d9670502bfcbc3c1ada400dff03d026ef337540b754009bca5156530a3ca6c0"} Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.704917 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.704957 4740 scope.go:117] "RemoveContainer" containerID="07f4eb455fcfd795fec2f06da3b80fb8d4a39f33c897b9886ff1fac1374356df" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.741213 4740 scope.go:117] "RemoveContainer" containerID="9c72d3d9dd7438c7ccdd06cde51c6603d8c35dec72448b09a16e877af64b9d1a" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.780272 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.798599 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.818634 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:42 crc kubenswrapper[4740]: E0130 16:20:42.819214 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="ceilometer-central-agent" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.819249 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="ceilometer-central-agent" Jan 30 16:20:42 crc kubenswrapper[4740]: E0130 16:20:42.819268 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="sg-core" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.819275 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="sg-core" Jan 30 16:20:42 crc kubenswrapper[4740]: E0130 16:20:42.819294 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="ceilometer-notification-agent" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.819300 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="ceilometer-notification-agent" Jan 30 16:20:42 crc kubenswrapper[4740]: E0130 16:20:42.819321 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="proxy-httpd" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.819328 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="proxy-httpd" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.819551 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="sg-core" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.819576 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="proxy-httpd" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.819591 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="ceilometer-central-agent" Jan 30 16:20:42 crc 
kubenswrapper[4740]: I0130 16:20:42.819601 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="232652ff-1455-468e-a812-d7699d96ffbe" containerName="ceilometer-notification-agent" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.822952 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.827376 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.827478 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.835225 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.875970 4740 scope.go:117] "RemoveContainer" containerID="0163a3a330b4305964abd68af8f255615cee14b9162f7673cbc0ce731b026e11" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.916533 4740 scope.go:117] "RemoveContainer" containerID="c28b2a422df3802eb1cac5fbd264e20af55270cd79c15b78269818a8cdd2f16a" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.935936 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-log-httpd\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.935991 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.936027 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-run-httpd\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.936047 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-config-data\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.936064 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2xq\" (UniqueName: \"kubernetes.io/projected/e7a5e983-5d40-4774-831b-074b30893056-kube-api-access-mp2xq\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.936085 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:42 crc kubenswrapper[4740]: I0130 16:20:42.936148 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-scripts\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.038119 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-log-httpd\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.038188 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.038220 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-run-httpd\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.038248 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-config-data\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.038270 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp2xq\" (UniqueName: \"kubernetes.io/projected/e7a5e983-5d40-4774-831b-074b30893056-kube-api-access-mp2xq\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.038292 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.038373 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-scripts\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.039802 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-run-httpd\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.040066 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-log-httpd\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.053792 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-config-data\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.057491 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-scripts\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.058917 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.068203 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.089462 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp2xq\" (UniqueName: \"kubernetes.io/projected/e7a5e983-5d40-4774-831b-074b30893056-kube-api-access-mp2xq\") pod \"ceilometer-0\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") " pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.164033 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.371674 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="232652ff-1455-468e-a812-d7699d96ffbe" path="/var/lib/kubelet/pods/232652ff-1455-468e-a812-d7699d96ffbe/volumes" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.379586 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eba7c81b-ae84-4672-9108-001326602860" path="/var/lib/kubelet/pods/eba7c81b-ae84-4672-9108-001326602860/volumes" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.580905 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qjb6j"] Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.583849 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qjb6j" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.606553 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qjb6j"] Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.668856 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-operator-scripts\") pod \"nova-api-db-create-qjb6j\" (UID: \"0e660bb0-89ad-41e3-8b92-c57cdb00e15a\") " pod="openstack/nova-api-db-create-qjb6j" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.668996 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfgls\" (UniqueName: \"kubernetes.io/projected/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-kube-api-access-xfgls\") pod \"nova-api-db-create-qjb6j\" (UID: \"0e660bb0-89ad-41e3-8b92-c57cdb00e15a\") " pod="openstack/nova-api-db-create-qjb6j" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.695394 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-4t7gm"] Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.707377 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4t7gm" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.712790 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4t7gm"] Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.772055 4740 generic.go:334] "Generic (PLEG): container finished" podID="cbe1d10c-c40d-4b5a-bc95-10495060deb7" containerID="d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999" exitCode=143 Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.772065 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfgls\" (UniqueName: \"kubernetes.io/projected/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-kube-api-access-xfgls\") pod \"nova-api-db-create-qjb6j\" (UID: \"0e660bb0-89ad-41e3-8b92-c57cdb00e15a\") " pod="openstack/nova-api-db-create-qjb6j" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.772448 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbe1d10c-c40d-4b5a-bc95-10495060deb7","Type":"ContainerDied","Data":"d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999"} Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.772491 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wmkm\" (UniqueName: \"kubernetes.io/projected/b99efeb7-303a-444e-8427-8c5613d8bc65-kube-api-access-9wmkm\") pod \"nova-cell0-db-create-4t7gm\" (UID: \"b99efeb7-303a-444e-8427-8c5613d8bc65\") " pod="openstack/nova-cell0-db-create-4t7gm" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.772613 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b99efeb7-303a-444e-8427-8c5613d8bc65-operator-scripts\") pod \"nova-cell0-db-create-4t7gm\" (UID: \"b99efeb7-303a-444e-8427-8c5613d8bc65\") " pod="openstack/nova-cell0-db-create-4t7gm" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.772826 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-operator-scripts\") pod \"nova-api-db-create-qjb6j\" (UID: \"0e660bb0-89ad-41e3-8b92-c57cdb00e15a\") " pod="openstack/nova-api-db-create-qjb6j" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.773692 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-operator-scripts\") pod \"nova-api-db-create-qjb6j\" (UID: \"0e660bb0-89ad-41e3-8b92-c57cdb00e15a\") " pod="openstack/nova-api-db-create-qjb6j" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.800235 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"72171841-1d34-41c7-80b8-d9ff3550e843","Type":"ContainerStarted","Data":"5cdbbb74ff5bfc23057a5254b453a1465d9bcf1c6706ef5b70b40275b3f8734a"} Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.800304 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"72171841-1d34-41c7-80b8-d9ff3550e843","Type":"ContainerStarted","Data":"d0af63d377ab3fa8cd979230d867bfd66ae5560ed34148ef03e38bbb9948e09c"} Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.800928 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfgls\" (UniqueName: \"kubernetes.io/projected/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-kube-api-access-xfgls\") pod \"nova-api-db-create-qjb6j\" (UID: \"0e660bb0-89ad-41e3-8b92-c57cdb00e15a\") " pod="openstack/nova-api-db-create-qjb6j" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.845369 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8pn9g"] Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.848545 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8pn9g" Jan 30 16:20:43 crc kubenswrapper[4740]: W0130 16:20:43.848592 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7a5e983_5d40_4774_831b_074b30893056.slice/crio-89d5a4d2655ed806394fa6d586e13b15801d98bd7558d8561e0beb701521d8e9 WatchSource:0}: Error finding container 89d5a4d2655ed806394fa6d586e13b15801d98bd7558d8561e0beb701521d8e9: Status 404 returned error can't find the container with id 89d5a4d2655ed806394fa6d586e13b15801d98bd7558d8561e0beb701521d8e9 Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.878679 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wmkm\" (UniqueName: \"kubernetes.io/projected/b99efeb7-303a-444e-8427-8c5613d8bc65-kube-api-access-9wmkm\") pod \"nova-cell0-db-create-4t7gm\" (UID: \"b99efeb7-303a-444e-8427-8c5613d8bc65\") " pod="openstack/nova-cell0-db-create-4t7gm" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.878748 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b99efeb7-303a-444e-8427-8c5613d8bc65-operator-scripts\") pod \"nova-cell0-db-create-4t7gm\" (UID: \"b99efeb7-303a-444e-8427-8c5613d8bc65\") " pod="openstack/nova-cell0-db-create-4t7gm" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.879618 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b99efeb7-303a-444e-8427-8c5613d8bc65-operator-scripts\") pod \"nova-cell0-db-create-4t7gm\" (UID: \"b99efeb7-303a-444e-8427-8c5613d8bc65\") " pod="openstack/nova-cell0-db-create-4t7gm" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.900872 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-69a4-account-create-update-tnzls"] Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.907263 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wmkm\" (UniqueName: \"kubernetes.io/projected/b99efeb7-303a-444e-8427-8c5613d8bc65-kube-api-access-9wmkm\") pod \"nova-cell0-db-create-4t7gm\" (UID: \"b99efeb7-303a-444e-8427-8c5613d8bc65\") " pod="openstack/nova-cell0-db-create-4t7gm" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.908660 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-69a4-account-create-update-tnzls" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.912613 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.930910 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qjb6j" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.936949 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8pn9g"] Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.981636 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh9x2\" (UniqueName: \"kubernetes.io/projected/35414dff-66ea-4bb3-9a02-46c80f0822a8-kube-api-access-jh9x2\") pod \"nova-cell1-db-create-8pn9g\" (UID: \"35414dff-66ea-4bb3-9a02-46c80f0822a8\") " pod="openstack/nova-cell1-db-create-8pn9g" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.981709 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-operator-scripts\") pod \"nova-api-69a4-account-create-update-tnzls\" (UID: \"a7e2940b-8a2d-4865-a312-a5f3b783f0b0\") " pod="openstack/nova-api-69a4-account-create-update-tnzls" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.981785 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35414dff-66ea-4bb3-9a02-46c80f0822a8-operator-scripts\") pod \"nova-cell1-db-create-8pn9g\" (UID: \"35414dff-66ea-4bb3-9a02-46c80f0822a8\") " pod="openstack/nova-cell1-db-create-8pn9g" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.981850 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ksk\" (UniqueName: \"kubernetes.io/projected/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-kube-api-access-l8ksk\") pod \"nova-api-69a4-account-create-update-tnzls\" (UID: \"a7e2940b-8a2d-4865-a312-a5f3b783f0b0\") " pod="openstack/nova-api-69a4-account-create-update-tnzls" Jan 30 16:20:43 crc kubenswrapper[4740]: I0130 16:20:43.995655 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-69a4-account-create-update-tnzls"] Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.039967 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4t7gm" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.040966 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.054105 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=3.054077771 podStartE2EDuration="3.054077771s" podCreationTimestamp="2026-01-30 16:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:43.829570794 +0000 UTC m=+1492.466633413" watchObservedRunningTime="2026-01-30 16:20:44.054077771 +0000 UTC m=+1492.691140380" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.085104 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh9x2\" (UniqueName: \"kubernetes.io/projected/35414dff-66ea-4bb3-9a02-46c80f0822a8-kube-api-access-jh9x2\") pod \"nova-cell1-db-create-8pn9g\" (UID: \"35414dff-66ea-4bb3-9a02-46c80f0822a8\") " pod="openstack/nova-cell1-db-create-8pn9g" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.085178 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-operator-scripts\") pod \"nova-api-69a4-account-create-update-tnzls\" (UID: \"a7e2940b-8a2d-4865-a312-a5f3b783f0b0\") " pod="openstack/nova-api-69a4-account-create-update-tnzls" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.085229 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35414dff-66ea-4bb3-9a02-46c80f0822a8-operator-scripts\") pod \"nova-cell1-db-create-8pn9g\" (UID: \"35414dff-66ea-4bb3-9a02-46c80f0822a8\") " pod="openstack/nova-cell1-db-create-8pn9g" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.085280 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8ksk\" (UniqueName: \"kubernetes.io/projected/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-kube-api-access-l8ksk\") pod \"nova-api-69a4-account-create-update-tnzls\" (UID: \"a7e2940b-8a2d-4865-a312-a5f3b783f0b0\") " pod="openstack/nova-api-69a4-account-create-update-tnzls" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.086563 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35414dff-66ea-4bb3-9a02-46c80f0822a8-operator-scripts\") pod \"nova-cell1-db-create-8pn9g\" (UID: \"35414dff-66ea-4bb3-9a02-46c80f0822a8\") " pod="openstack/nova-cell1-db-create-8pn9g" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.090038 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-operator-scripts\") pod \"nova-api-69a4-account-create-update-tnzls\" (UID: \"a7e2940b-8a2d-4865-a312-a5f3b783f0b0\") " pod="openstack/nova-api-69a4-account-create-update-tnzls" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.115155 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8ksk\" (UniqueName: \"kubernetes.io/projected/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-kube-api-access-l8ksk\") pod \"nova-api-69a4-account-create-update-tnzls\" (UID: 
\"a7e2940b-8a2d-4865-a312-a5f3b783f0b0\") " pod="openstack/nova-api-69a4-account-create-update-tnzls" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.126056 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh9x2\" (UniqueName: \"kubernetes.io/projected/35414dff-66ea-4bb3-9a02-46c80f0822a8-kube-api-access-jh9x2\") pod \"nova-cell1-db-create-8pn9g\" (UID: \"35414dff-66ea-4bb3-9a02-46c80f0822a8\") " pod="openstack/nova-cell1-db-create-8pn9g" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.178942 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-aad2-account-create-update-69g8c"] Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.215113 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-aad2-account-create-update-69g8c" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.227510 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8pn9g" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.243771 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.319520 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-operator-scripts\") pod \"nova-cell0-aad2-account-create-update-69g8c\" (UID: \"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1\") " pod="openstack/nova-cell0-aad2-account-create-update-69g8c" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.319707 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-655kc\" (UniqueName: \"kubernetes.io/projected/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-kube-api-access-655kc\") pod \"nova-cell0-aad2-account-create-update-69g8c\" (UID: \"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1\") " pod="openstack/nova-cell0-aad2-account-create-update-69g8c" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.339333 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-aad2-account-create-update-69g8c"] Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.365244 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-69a4-account-create-update-tnzls" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.422684 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-operator-scripts\") pod \"nova-cell0-aad2-account-create-update-69g8c\" (UID: \"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1\") " pod="openstack/nova-cell0-aad2-account-create-update-69g8c" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.423175 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-655kc\" (UniqueName: \"kubernetes.io/projected/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-kube-api-access-655kc\") pod \"nova-cell0-aad2-account-create-update-69g8c\" (UID: \"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1\") " pod="openstack/nova-cell0-aad2-account-create-update-69g8c" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.424121 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-operator-scripts\") pod \"nova-cell0-aad2-account-create-update-69g8c\" (UID: \"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1\") " pod="openstack/nova-cell0-aad2-account-create-update-69g8c" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.436944 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7dff-account-create-update-kxgjd"] Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.439320 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.441874 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.447093 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-655kc\" (UniqueName: \"kubernetes.io/projected/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-kube-api-access-655kc\") pod \"nova-cell0-aad2-account-create-update-69g8c\" (UID: \"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1\") " pod="openstack/nova-cell0-aad2-account-create-update-69g8c" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.462305 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7dff-account-create-update-kxgjd"] Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.532967 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbfcfe3a-498e-4604-a5ca-97a951b24573-operator-scripts\") pod \"nova-cell1-7dff-account-create-update-kxgjd\" (UID: \"cbfcfe3a-498e-4604-a5ca-97a951b24573\") " pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.534196 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44w6\" (UniqueName: \"kubernetes.io/projected/cbfcfe3a-498e-4604-a5ca-97a951b24573-kube-api-access-w44w6\") pod \"nova-cell1-7dff-account-create-update-kxgjd\" (UID: \"cbfcfe3a-498e-4604-a5ca-97a951b24573\") " pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.638561 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w44w6\" (UniqueName: \"kubernetes.io/projected/cbfcfe3a-498e-4604-a5ca-97a951b24573-kube-api-access-w44w6\") pod \"nova-cell1-7dff-account-create-update-kxgjd\" (UID: \"cbfcfe3a-498e-4604-a5ca-97a951b24573\") " pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.639224 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbfcfe3a-498e-4604-a5ca-97a951b24573-operator-scripts\") pod \"nova-cell1-7dff-account-create-update-kxgjd\" (UID: \"cbfcfe3a-498e-4604-a5ca-97a951b24573\") " pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.641997 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbfcfe3a-498e-4604-a5ca-97a951b24573-operator-scripts\") pod \"nova-cell1-7dff-account-create-update-kxgjd\" (UID: \"cbfcfe3a-498e-4604-a5ca-97a951b24573\") " pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.659070 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-aad2-account-create-update-69g8c" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.667060 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w44w6\" (UniqueName: \"kubernetes.io/projected/cbfcfe3a-498e-4604-a5ca-97a951b24573-kube-api-access-w44w6\") pod \"nova-cell1-7dff-account-create-update-kxgjd\" (UID: \"cbfcfe3a-498e-4604-a5ca-97a951b24573\") " pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.678900 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qjb6j"] Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.769950 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.833968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qjb6j" event={"ID":"0e660bb0-89ad-41e3-8b92-c57cdb00e15a","Type":"ContainerStarted","Data":"c7c11181335ea08f970060fca74de8a4dd6c7da096cc3c286d90d2c082d1e1b7"} Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.840964 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a5e983-5d40-4774-831b-074b30893056","Type":"ContainerStarted","Data":"89d5a4d2655ed806394fa6d586e13b15801d98bd7558d8561e0beb701521d8e9"} Jan 30 16:20:44 crc kubenswrapper[4740]: I0130 16:20:44.890301 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4t7gm"] Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.075915 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8pn9g"] Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.143588 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.143940 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="961accad-8205-4289-9227-4ab2538ebdb1" containerName="glance-log" containerID="cri-o://03fbd565c19c239e68a13cc8441944efbe25f635e25dd9da6511f4c2d193a2f8" gracePeriod=30 Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.144120 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="961accad-8205-4289-9227-4ab2538ebdb1" containerName="glance-httpd" containerID="cri-o://0863b88005332061e30181135cd3f294739f8eb27b01ba7183a625bbd06f214e" gracePeriod=30 Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.217727 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-69a4-account-create-update-tnzls"] Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.473323 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-aad2-account-create-update-69g8c"] Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.539806 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.676166 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7dff-account-create-update-kxgjd"] Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.882082 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" event={"ID":"cbfcfe3a-498e-4604-a5ca-97a951b24573","Type":"ContainerStarted","Data":"378c36cdcd413ae40b069e71dbe93aea148a2d6002a0d341064f5972673d1056"} Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.892680 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4t7gm" event={"ID":"b99efeb7-303a-444e-8427-8c5613d8bc65","Type":"ContainerStarted","Data":"426d00a39ed7cbcfb518804feac673fe151df010066879a5ba2ca3c032ca7f2c"} Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.892736 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4t7gm" event={"ID":"b99efeb7-303a-444e-8427-8c5613d8bc65","Type":"ContainerStarted","Data":"48f016a81f79f7dbf2129b05b589fde9ca2d5aa5b962857773af99fa32fea4c1"} Jan 30 
Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.912248 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-4t7gm" podStartSLOduration=2.912206117 podStartE2EDuration="2.912206117s" podCreationTimestamp="2026-01-30 16:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:45.909846049 +0000 UTC m=+1494.546908648" watchObservedRunningTime="2026-01-30 16:20:45.912206117 +0000 UTC m=+1494.549268716"
Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.915969 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-69a4-account-create-update-tnzls" event={"ID":"a7e2940b-8a2d-4865-a312-a5f3b783f0b0","Type":"ContainerStarted","Data":"ed468d6ec953cfa279f095debd149975043b9ad5131f401a04d04777f2d3bab7"}
Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.916042 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-69a4-account-create-update-tnzls" event={"ID":"a7e2940b-8a2d-4865-a312-a5f3b783f0b0","Type":"ContainerStarted","Data":"8b09c8af5bb4e8b04b69cfdbc0ee5301e15bb73cb38618c7719157b16f134281"}
Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.923481 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qjb6j" event={"ID":"0e660bb0-89ad-41e3-8b92-c57cdb00e15a","Type":"ContainerStarted","Data":"1f8aadcf72df2f6526b5f684e17833291def67ee4547f64acc7f033b2810bef0"}
Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.936175 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a5e983-5d40-4774-831b-074b30893056","Type":"ContainerStarted","Data":"9481db3ab5ce2e92a8d194ad7253b9bafea9ebe41be5ef14f099c3c545899672"}
Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.950191 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-aad2-account-create-update-69g8c" event={"ID":"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1","Type":"ContainerStarted","Data":"48aa0d792b0fa4eefd5063c501d8f6125b2123bb49f1f935adc0dced2a1088c6"}
Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.966116 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8pn9g" event={"ID":"35414dff-66ea-4bb3-9a02-46c80f0822a8","Type":"ContainerStarted","Data":"9e68fc51326d234de6d59f66d4ebf21c6d69e87d179363a28d118bbe8c159d6a"}
Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.966179 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8pn9g" event={"ID":"35414dff-66ea-4bb3-9a02-46c80f0822a8","Type":"ContainerStarted","Data":"82ae5a4053d054e32a03ed5ada4bbe565fb3a926cc89eca2a7006846da9703d1"}
Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.997739 4740 generic.go:334] "Generic (PLEG): container finished" podID="961accad-8205-4289-9227-4ab2538ebdb1" containerID="03fbd565c19c239e68a13cc8441944efbe25f635e25dd9da6511f4c2d193a2f8" exitCode=143
Jan 30 16:20:45 crc kubenswrapper[4740]: I0130 16:20:45.998167 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"961accad-8205-4289-9227-4ab2538ebdb1","Type":"ContainerDied","Data":"03fbd565c19c239e68a13cc8441944efbe25f635e25dd9da6511f4c2d193a2f8"}
Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.006237 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-69a4-account-create-update-tnzls" podStartSLOduration=3.006204748 podStartE2EDuration="3.006204748s" podCreationTimestamp="2026-01-30 16:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:45.940511749 +0000 UTC m=+1494.577574368" watchObservedRunningTime="2026-01-30 16:20:46.006204748 +0000 UTC m=+1494.643267347"
Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.062370 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-qjb6j" podStartSLOduration=3.06231155 podStartE2EDuration="3.06231155s" podCreationTimestamp="2026-01-30 16:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:45.983528636 +0000 UTC m=+1494.620591225" watchObservedRunningTime="2026-01-30 16:20:46.06231155 +0000 UTC m=+1494.699374169"
Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.081687 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-8pn9g" podStartSLOduration=3.081661289 podStartE2EDuration="3.081661289s" podCreationTimestamp="2026-01-30 16:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:46.007846519 +0000 UTC m=+1494.644909118" watchObservedRunningTime="2026-01-30 16:20:46.081661289 +0000 UTC m=+1494.718723888"
Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.090885 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-aad2-account-create-update-69g8c" podStartSLOduration=3.090857428 podStartE2EDuration="3.090857428s" podCreationTimestamp="2026-01-30 16:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:46.057862489 +0000 UTC m=+1494.694925088" watchObservedRunningTime="2026-01-30 16:20:46.090857428 +0000 UTC m=+1494.727920027"
Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.855743 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/3.log"
Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.859868 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64bd6c9fd8-9p6nz"
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991086 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991180 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-combined-ca-bundle\") pod \"2701590d-93ff-476c-8ad7-fd118b873a3e\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991245 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-logs\") pod \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991382 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-config\") pod \"2701590d-93ff-476c-8ad7-fd118b873a3e\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991416 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-public-tls-certs\") pod \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991462 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvsth\" (UniqueName: \"kubernetes.io/projected/2701590d-93ff-476c-8ad7-fd118b873a3e-kube-api-access-zvsth\") pod \"2701590d-93ff-476c-8ad7-fd118b873a3e\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991483 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-httpd-config\") pod \"2701590d-93ff-476c-8ad7-fd118b873a3e\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991596 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-config-data\") pod \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991687 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-ovndb-tls-certs\") pod \"2701590d-93ff-476c-8ad7-fd118b873a3e\" (UID: \"2701590d-93ff-476c-8ad7-fd118b873a3e\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991738 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-httpd-run\") pod \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\" (UID: 
\"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991760 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-scripts\") pod \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991806 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-combined-ca-bundle\") pod \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.991847 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pgsz\" (UniqueName: \"kubernetes.io/projected/cbe1d10c-c40d-4b5a-bc95-10495060deb7-kube-api-access-6pgsz\") pod \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\" (UID: \"cbe1d10c-c40d-4b5a-bc95-10495060deb7\") " Jan 30 16:20:46 crc kubenswrapper[4740]: I0130 16:20:46.993159 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-logs" (OuterVolumeSpecName: "logs") pod "cbe1d10c-c40d-4b5a-bc95-10495060deb7" (UID: "cbe1d10c-c40d-4b5a-bc95-10495060deb7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.000942 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cbe1d10c-c40d-4b5a-bc95-10495060deb7" (UID: "cbe1d10c-c40d-4b5a-bc95-10495060deb7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.010706 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-scripts" (OuterVolumeSpecName: "scripts") pod "cbe1d10c-c40d-4b5a-bc95-10495060deb7" (UID: "cbe1d10c-c40d-4b5a-bc95-10495060deb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.029676 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2701590d-93ff-476c-8ad7-fd118b873a3e" (UID: "2701590d-93ff-476c-8ad7-fd118b873a3e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.031207 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe1d10c-c40d-4b5a-bc95-10495060deb7-kube-api-access-6pgsz" (OuterVolumeSpecName: "kube-api-access-6pgsz") pod "cbe1d10c-c40d-4b5a-bc95-10495060deb7" (UID: "cbe1d10c-c40d-4b5a-bc95-10495060deb7"). InnerVolumeSpecName "kube-api-access-6pgsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.052555 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2701590d-93ff-476c-8ad7-fd118b873a3e-kube-api-access-zvsth" (OuterVolumeSpecName: "kube-api-access-zvsth") pod "2701590d-93ff-476c-8ad7-fd118b873a3e" (UID: "2701590d-93ff-476c-8ad7-fd118b873a3e"). InnerVolumeSpecName "kube-api-access-zvsth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.066599 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a5e983-5d40-4774-831b-074b30893056","Type":"ContainerStarted","Data":"af5314b6f0b62135ae2896b483ccac69838a0ace303fc52609ea010b711286ce"} Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.071703 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64bd6c9fd8-9p6nz_2701590d-93ff-476c-8ad7-fd118b873a3e/neutron-httpd/3.log" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.078659 4740 generic.go:334] "Generic (PLEG): container finished" podID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerID="7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45" exitCode=0 Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.078801 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bd6c9fd8-9p6nz" event={"ID":"2701590d-93ff-476c-8ad7-fd118b873a3e","Type":"ContainerDied","Data":"7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45"} Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.078840 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64bd6c9fd8-9p6nz" event={"ID":"2701590d-93ff-476c-8ad7-fd118b873a3e","Type":"ContainerDied","Data":"e0808ec587ab176304d03b38be5f4b051a412123383533c9828f1abfab5acdc3"} Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.078866 4740 scope.go:117] "RemoveContainer" containerID="f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.079026 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64bd6c9fd8-9p6nz" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.095335 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.104516 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.104615 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pgsz\" (UniqueName: \"kubernetes.io/projected/cbe1d10c-c40d-4b5a-bc95-10495060deb7-kube-api-access-6pgsz\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.104683 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbe1d10c-c40d-4b5a-bc95-10495060deb7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.104738 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvsth\" (UniqueName: \"kubernetes.io/projected/2701590d-93ff-476c-8ad7-fd118b873a3e-kube-api-access-zvsth\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.104829 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.110608 4740 generic.go:334] "Generic (PLEG): container finished" podID="6eda7b5b-a45e-4aaa-a107-1b602beb6ed1" containerID="14e9679d70a170957aa301054b35813fde673c65a415212b93d0ee02212b7385" exitCode=0 Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.110786 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-aad2-account-create-update-69g8c" event={"ID":"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1","Type":"ContainerDied","Data":"14e9679d70a170957aa301054b35813fde673c65a415212b93d0ee02212b7385"} Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.131714 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4" (OuterVolumeSpecName: "glance") pod "cbe1d10c-c40d-4b5a-bc95-10495060deb7" (UID: "cbe1d10c-c40d-4b5a-bc95-10495060deb7"). InnerVolumeSpecName "pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.150938 4740 generic.go:334] "Generic (PLEG): container finished" podID="a7e2940b-8a2d-4865-a312-a5f3b783f0b0" containerID="ed468d6ec953cfa279f095debd149975043b9ad5131f401a04d04777f2d3bab7" exitCode=0 Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.151069 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-69a4-account-create-update-tnzls" event={"ID":"a7e2940b-8a2d-4865-a312-a5f3b783f0b0","Type":"ContainerDied","Data":"ed468d6ec953cfa279f095debd149975043b9ad5131f401a04d04777f2d3bab7"} Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.201713 4740 generic.go:334] "Generic (PLEG): container finished" podID="0e660bb0-89ad-41e3-8b92-c57cdb00e15a" containerID="1f8aadcf72df2f6526b5f684e17833291def67ee4547f64acc7f033b2810bef0" exitCode=0 Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.201876 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qjb6j" event={"ID":"0e660bb0-89ad-41e3-8b92-c57cdb00e15a","Type":"ContainerDied","Data":"1f8aadcf72df2f6526b5f684e17833291def67ee4547f64acc7f033b2810bef0"} Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.226871 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") on node \"crc\" " Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.227276 4740 generic.go:334] "Generic (PLEG): container finished" podID="cbe1d10c-c40d-4b5a-bc95-10495060deb7" containerID="a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c" exitCode=0 Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.227486 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbe1d10c-c40d-4b5a-bc95-10495060deb7","Type":"ContainerDied","Data":"a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c"} Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.227528 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbe1d10c-c40d-4b5a-bc95-10495060deb7","Type":"ContainerDied","Data":"33151f480a33123807d56072ce6fdf8cc510da9894760a99d39a477935ab1ff3"} Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.227615 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.253709 4740 generic.go:334] "Generic (PLEG): container finished" podID="35414dff-66ea-4bb3-9a02-46c80f0822a8" containerID="9e68fc51326d234de6d59f66d4ebf21c6d69e87d179363a28d118bbe8c159d6a" exitCode=0 Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.253829 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8pn9g" event={"ID":"35414dff-66ea-4bb3-9a02-46c80f0822a8","Type":"ContainerDied","Data":"9e68fc51326d234de6d59f66d4ebf21c6d69e87d179363a28d118bbe8c159d6a"} Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.271217 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" event={"ID":"cbfcfe3a-498e-4604-a5ca-97a951b24573","Type":"ContainerStarted","Data":"286cb5759262c2cd4647cdd155e1c1f4c5a006dcc9984e672168f649fc36abf2"} Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.326868 4740 generic.go:334] "Generic (PLEG): container finished" podID="b99efeb7-303a-444e-8427-8c5613d8bc65" containerID="426d00a39ed7cbcfb518804feac673fe151df010066879a5ba2ca3c032ca7f2c" exitCode=0 Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.326948 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4t7gm" event={"ID":"b99efeb7-303a-444e-8427-8c5613d8bc65","Type":"ContainerDied","Data":"426d00a39ed7cbcfb518804feac673fe151df010066879a5ba2ca3c032ca7f2c"} Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.606715 4740 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.607417 4740 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4") on node "crc" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.660574 4740 reconciler_common.go:293] "Volume detached for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.661153 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="22570e91-9697-47f0-81d5-c38551f883b2" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.195:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.715820 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbe1d10c-c40d-4b5a-bc95-10495060deb7" (UID: "cbe1d10c-c40d-4b5a-bc95-10495060deb7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.769864 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.795010 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cbe1d10c-c40d-4b5a-bc95-10495060deb7" (UID: "cbe1d10c-c40d-4b5a-bc95-10495060deb7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.850546 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-config-data" (OuterVolumeSpecName: "config-data") pod "cbe1d10c-c40d-4b5a-bc95-10495060deb7" (UID: "cbe1d10c-c40d-4b5a-bc95-10495060deb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.862528 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2701590d-93ff-476c-8ad7-fd118b873a3e" (UID: "2701590d-93ff-476c-8ad7-fd118b873a3e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.876117 4740 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.876180 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe1d10c-c40d-4b5a-bc95-10495060deb7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.876189 4740 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.905302 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-config" (OuterVolumeSpecName: "config") pod "2701590d-93ff-476c-8ad7-fd118b873a3e" (UID: "2701590d-93ff-476c-8ad7-fd118b873a3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.913031 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2701590d-93ff-476c-8ad7-fd118b873a3e" (UID: "2701590d-93ff-476c-8ad7-fd118b873a3e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.978407 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:47 crc kubenswrapper[4740]: I0130 16:20:47.978448 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2701590d-93ff-476c-8ad7-fd118b873a3e-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.088212 4740 scope.go:117] "RemoveContainer" containerID="7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.134774 4740 scope.go:117] "RemoveContainer" containerID="f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c" Jan 30 16:20:48 crc kubenswrapper[4740]: E0130 16:20:48.138066 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c\": container with ID starting with f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c not found: ID does not exist" containerID="f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.138106 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c"} err="failed to get container status \"f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c\": rpc error: code = NotFound desc = could not find container \"f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c\": container with ID starting with f1155d8a7d45732d58aeb5d20660802b3fa13fe61fc11d2581ba23f697698a1c not found: ID does not exist" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.138131 4740 scope.go:117] "RemoveContainer" containerID="7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45" Jan 30 16:20:48 crc kubenswrapper[4740]: E0130 16:20:48.139856 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45\": container with ID starting with 7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45 not found: ID does not exist" containerID="7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.139934 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45"} err="failed to get container status \"7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45\": rpc error: code = NotFound desc = could not find container \"7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45\": container with ID starting with 7abaec2b72bdf860b5254d818d60cd0d4e646e3021cfbb20cdbbb9345920fa45 not found: ID does not exist" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.139984 4740 scope.go:117] "RemoveContainer" containerID="a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.165198 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-64bd6c9fd8-9p6nz"] Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.207442 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64bd6c9fd8-9p6nz"] Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.215806 4740 scope.go:117] "RemoveContainer" containerID="d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.238424 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.299631 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.326497 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:20:48 crc kubenswrapper[4740]: E0130 16:20:48.327042 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327063 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: E0130 16:20:48.327082 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327090 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: E0130 16:20:48.327115 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327123 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: E0130 16:20:48.327139 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe1d10c-c40d-4b5a-bc95-10495060deb7" containerName="glance-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327149 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe1d10c-c40d-4b5a-bc95-10495060deb7" containerName="glance-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: E0130 16:20:48.327162 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327169 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: E0130 16:20:48.327182 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe1d10c-c40d-4b5a-bc95-10495060deb7" containerName="glance-log" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327190 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe1d10c-c40d-4b5a-bc95-10495060deb7" containerName="glance-log" Jan 30 16:20:48 crc kubenswrapper[4740]: E0130 16:20:48.327213 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-api" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327218 4740 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-api" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327459 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe1d10c-c40d-4b5a-bc95-10495060deb7" containerName="glance-log" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327479 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327486 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327497 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-api" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327506 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe1d10c-c40d-4b5a-bc95-10495060deb7" containerName="glance-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327524 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.327955 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" containerName="neutron-httpd" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.328995 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.330547 4740 scope.go:117] "RemoveContainer" containerID="a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.334019 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.335338 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.335525 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:20:48 crc kubenswrapper[4740]: E0130 16:20:48.337111 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c\": container with ID starting with a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c not found: ID does not exist" containerID="a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.337149 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c"} err="failed to get container status \"a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c\": rpc error: code = NotFound desc = could not find container \"a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c\": container with ID starting with a4ac531a6cdb4e6c4f05a402ceb2e5e54e9e95c5a133a19ad3450fd4326eff7c not found: ID does not exist" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.337183 4740 scope.go:117] "RemoveContainer" 
containerID="d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999" Jan 30 16:20:48 crc kubenswrapper[4740]: E0130 16:20:48.342554 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999\": container with ID starting with d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999 not found: ID does not exist" containerID="d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.342595 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999"} err="failed to get container status \"d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999\": rpc error: code = NotFound desc = could not find container \"d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999\": container with ID starting with d1d686bf807529c4716101d8adcb470962b52d84b1ed183da0165cf83f455999 not found: ID does not exist" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.373549 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a5e983-5d40-4774-831b-074b30893056","Type":"ContainerStarted","Data":"ab009bed225a21a3b3fbc9cab3e1047d2ba012530f73099e5d6b5cae4bd66043"} Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.424943 4740 generic.go:334] "Generic (PLEG): container finished" podID="cbfcfe3a-498e-4604-a5ca-97a951b24573" containerID="286cb5759262c2cd4647cdd155e1c1f4c5a006dcc9984e672168f649fc36abf2" exitCode=0 Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.425429 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" event={"ID":"cbfcfe3a-498e-4604-a5ca-97a951b24573","Type":"ContainerDied","Data":"286cb5759262c2cd4647cdd155e1c1f4c5a006dcc9984e672168f649fc36abf2"} Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.495272 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.495367 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-logs\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.495414 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.495434 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.495457 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.495472 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.495561 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9bx\" (UniqueName: \"kubernetes.io/projected/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-kube-api-access-9q9bx\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.495578 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.600581 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q9bx\" (UniqueName: \"kubernetes.io/projected/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-kube-api-access-9q9bx\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.600636 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.600788 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.600842 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-logs\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.600881 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.600912 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.600943 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.600966 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.608942 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.611503 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.611889 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-logs\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.631089 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.675525 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.675977 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d7b0e4f5e4f3aeb77e4c3eeab8492d1ed4d740072e82a0a742970a29e35f2749/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.687657 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.687913 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q9bx\" (UniqueName: \"kubernetes.io/projected/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-kube-api-access-9q9bx\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.722105 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e3b49e9-60b0-4090-a703-acbc21b9b6b0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.777757 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8617d3cd-cf72-4b98-b61f-a4e48ee558f4\") pod \"glance-default-external-api-0\" (UID: \"0e3b49e9-60b0-4090-a703-acbc21b9b6b0\") " pod="openstack/glance-default-external-api-0" Jan 30 16:20:48 crc kubenswrapper[4740]: I0130 16:20:48.966105 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.117107 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8pn9g" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.230689 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35414dff-66ea-4bb3-9a02-46c80f0822a8-operator-scripts\") pod \"35414dff-66ea-4bb3-9a02-46c80f0822a8\" (UID: \"35414dff-66ea-4bb3-9a02-46c80f0822a8\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.231024 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh9x2\" (UniqueName: \"kubernetes.io/projected/35414dff-66ea-4bb3-9a02-46c80f0822a8-kube-api-access-jh9x2\") pod \"35414dff-66ea-4bb3-9a02-46c80f0822a8\" (UID: \"35414dff-66ea-4bb3-9a02-46c80f0822a8\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.232143 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35414dff-66ea-4bb3-9a02-46c80f0822a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35414dff-66ea-4bb3-9a02-46c80f0822a8" (UID: "35414dff-66ea-4bb3-9a02-46c80f0822a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.239148 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35414dff-66ea-4bb3-9a02-46c80f0822a8-kube-api-access-jh9x2" (OuterVolumeSpecName: "kube-api-access-jh9x2") pod "35414dff-66ea-4bb3-9a02-46c80f0822a8" (UID: "35414dff-66ea-4bb3-9a02-46c80f0822a8"). InnerVolumeSpecName "kube-api-access-jh9x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.334660 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh9x2\" (UniqueName: \"kubernetes.io/projected/35414dff-66ea-4bb3-9a02-46c80f0822a8-kube-api-access-jh9x2\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.335053 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35414dff-66ea-4bb3-9a02-46c80f0822a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.361840 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2701590d-93ff-476c-8ad7-fd118b873a3e" path="/var/lib/kubelet/pods/2701590d-93ff-476c-8ad7-fd118b873a3e/volumes" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.365675 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe1d10c-c40d-4b5a-bc95-10495060deb7" path="/var/lib/kubelet/pods/cbe1d10c-c40d-4b5a-bc95-10495060deb7/volumes" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.536339 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.537230 4740 generic.go:334] "Generic (PLEG): container finished" podID="961accad-8205-4289-9227-4ab2538ebdb1" containerID="0863b88005332061e30181135cd3f294739f8eb27b01ba7183a625bbd06f214e" exitCode=0 Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.537317 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"961accad-8205-4289-9227-4ab2538ebdb1","Type":"ContainerDied","Data":"0863b88005332061e30181135cd3f294739f8eb27b01ba7183a625bbd06f214e"} Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.541600 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" event={"ID":"cbfcfe3a-498e-4604-a5ca-97a951b24573","Type":"ContainerDied","Data":"378c36cdcd413ae40b069e71dbe93aea148a2d6002a0d341064f5972673d1056"} Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.541625 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="378c36cdcd413ae40b069e71dbe93aea148a2d6002a0d341064f5972673d1056" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.541675 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7dff-account-create-update-kxgjd" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.545124 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8pn9g" event={"ID":"35414dff-66ea-4bb3-9a02-46c80f0822a8","Type":"ContainerDied","Data":"82ae5a4053d054e32a03ed5ada4bbe565fb3a926cc89eca2a7006846da9703d1"} Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.545155 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82ae5a4053d054e32a03ed5ada4bbe565fb3a926cc89eca2a7006846da9703d1" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.545206 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8pn9g" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.630144 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qjb6j" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.640251 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w44w6\" (UniqueName: \"kubernetes.io/projected/cbfcfe3a-498e-4604-a5ca-97a951b24573-kube-api-access-w44w6\") pod \"cbfcfe3a-498e-4604-a5ca-97a951b24573\" (UID: \"cbfcfe3a-498e-4604-a5ca-97a951b24573\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.640574 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbfcfe3a-498e-4604-a5ca-97a951b24573-operator-scripts\") pod \"cbfcfe3a-498e-4604-a5ca-97a951b24573\" (UID: \"cbfcfe3a-498e-4604-a5ca-97a951b24573\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.641593 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbfcfe3a-498e-4604-a5ca-97a951b24573-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbfcfe3a-498e-4604-a5ca-97a951b24573" (UID: "cbfcfe3a-498e-4604-a5ca-97a951b24573"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.672465 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbfcfe3a-498e-4604-a5ca-97a951b24573-kube-api-access-w44w6" (OuterVolumeSpecName: "kube-api-access-w44w6") pod "cbfcfe3a-498e-4604-a5ca-97a951b24573" (UID: "cbfcfe3a-498e-4604-a5ca-97a951b24573"). InnerVolumeSpecName "kube-api-access-w44w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.675588 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4t7gm" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.745561 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b99efeb7-303a-444e-8427-8c5613d8bc65-operator-scripts\") pod \"b99efeb7-303a-444e-8427-8c5613d8bc65\" (UID: \"b99efeb7-303a-444e-8427-8c5613d8bc65\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.745649 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfgls\" (UniqueName: \"kubernetes.io/projected/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-kube-api-access-xfgls\") pod \"0e660bb0-89ad-41e3-8b92-c57cdb00e15a\" (UID: \"0e660bb0-89ad-41e3-8b92-c57cdb00e15a\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.745721 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-operator-scripts\") pod \"0e660bb0-89ad-41e3-8b92-c57cdb00e15a\" (UID: \"0e660bb0-89ad-41e3-8b92-c57cdb00e15a\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.745947 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wmkm\" (UniqueName: \"kubernetes.io/projected/b99efeb7-303a-444e-8427-8c5613d8bc65-kube-api-access-9wmkm\") pod \"b99efeb7-303a-444e-8427-8c5613d8bc65\" (UID: \"b99efeb7-303a-444e-8427-8c5613d8bc65\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.746509 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w44w6\" (UniqueName: \"kubernetes.io/projected/cbfcfe3a-498e-4604-a5ca-97a951b24573-kube-api-access-w44w6\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.746526 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbfcfe3a-498e-4604-a5ca-97a951b24573-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.747697 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e660bb0-89ad-41e3-8b92-c57cdb00e15a" (UID: "0e660bb0-89ad-41e3-8b92-c57cdb00e15a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.748191 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b99efeb7-303a-444e-8427-8c5613d8bc65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b99efeb7-303a-444e-8427-8c5613d8bc65" (UID: "b99efeb7-303a-444e-8427-8c5613d8bc65"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.763713 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-kube-api-access-xfgls" (OuterVolumeSpecName: "kube-api-access-xfgls") pod "0e660bb0-89ad-41e3-8b92-c57cdb00e15a" (UID: "0e660bb0-89ad-41e3-8b92-c57cdb00e15a"). InnerVolumeSpecName "kube-api-access-xfgls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.768599 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99efeb7-303a-444e-8427-8c5613d8bc65-kube-api-access-9wmkm" (OuterVolumeSpecName: "kube-api-access-9wmkm") pod "b99efeb7-303a-444e-8427-8c5613d8bc65" (UID: "b99efeb7-303a-444e-8427-8c5613d8bc65"). InnerVolumeSpecName "kube-api-access-9wmkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.798176 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-69a4-account-create-update-tnzls" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.849321 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b99efeb7-303a-444e-8427-8c5613d8bc65-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.849377 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfgls\" (UniqueName: \"kubernetes.io/projected/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-kube-api-access-xfgls\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.849389 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e660bb0-89ad-41e3-8b92-c57cdb00e15a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.849399 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wmkm\" (UniqueName: \"kubernetes.io/projected/b99efeb7-303a-444e-8427-8c5613d8bc65-kube-api-access-9wmkm\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.853479 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-aad2-account-create-update-69g8c" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.950991 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8ksk\" (UniqueName: \"kubernetes.io/projected/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-kube-api-access-l8ksk\") pod \"a7e2940b-8a2d-4865-a312-a5f3b783f0b0\" (UID: \"a7e2940b-8a2d-4865-a312-a5f3b783f0b0\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.951079 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-operator-scripts\") pod \"a7e2940b-8a2d-4865-a312-a5f3b783f0b0\" (UID: \"a7e2940b-8a2d-4865-a312-a5f3b783f0b0\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.951139 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-655kc\" (UniqueName: \"kubernetes.io/projected/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-kube-api-access-655kc\") pod \"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1\" (UID: \"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.951313 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-operator-scripts\") pod \"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1\" (UID: \"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1\") " Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.951990 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7e2940b-8a2d-4865-a312-a5f3b783f0b0" (UID: "a7e2940b-8a2d-4865-a312-a5f3b783f0b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.955664 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6eda7b5b-a45e-4aaa-a107-1b602beb6ed1" (UID: "6eda7b5b-a45e-4aaa-a107-1b602beb6ed1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.957527 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-kube-api-access-l8ksk" (OuterVolumeSpecName: "kube-api-access-l8ksk") pod "a7e2940b-8a2d-4865-a312-a5f3b783f0b0" (UID: "a7e2940b-8a2d-4865-a312-a5f3b783f0b0"). InnerVolumeSpecName "kube-api-access-l8ksk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:49 crc kubenswrapper[4740]: I0130 16:20:49.971658 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-kube-api-access-655kc" (OuterVolumeSpecName: "kube-api-access-655kc") pod "6eda7b5b-a45e-4aaa-a107-1b602beb6ed1" (UID: "6eda7b5b-a45e-4aaa-a107-1b602beb6ed1"). InnerVolumeSpecName "kube-api-access-655kc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.056887 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.056970 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8ksk\" (UniqueName: \"kubernetes.io/projected/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-kube-api-access-l8ksk\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.056993 4740 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7e2940b-8a2d-4865-a312-a5f3b783f0b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.057005 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-655kc\" (UniqueName: \"kubernetes.io/projected/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1-kube-api-access-655kc\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.305785 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.367314 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-internal-tls-certs\") pod \"961accad-8205-4289-9227-4ab2538ebdb1\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.367395 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-httpd-run\") pod \"961accad-8205-4289-9227-4ab2538ebdb1\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.367462 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-logs\") pod \"961accad-8205-4289-9227-4ab2538ebdb1\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.367488 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-combined-ca-bundle\") pod \"961accad-8205-4289-9227-4ab2538ebdb1\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.367549 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-scripts\") pod \"961accad-8205-4289-9227-4ab2538ebdb1\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.367663 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br5k6\" (UniqueName: \"kubernetes.io/projected/961accad-8205-4289-9227-4ab2538ebdb1-kube-api-access-br5k6\") pod \"961accad-8205-4289-9227-4ab2538ebdb1\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.367883 4740 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"961accad-8205-4289-9227-4ab2538ebdb1\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.368027 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "961accad-8205-4289-9227-4ab2538ebdb1" (UID: "961accad-8205-4289-9227-4ab2538ebdb1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.368098 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-config-data\") pod \"961accad-8205-4289-9227-4ab2538ebdb1\" (UID: \"961accad-8205-4289-9227-4ab2538ebdb1\") " Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.368698 4740 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.368990 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-logs" (OuterVolumeSpecName: "logs") pod "961accad-8205-4289-9227-4ab2538ebdb1" (UID: "961accad-8205-4289-9227-4ab2538ebdb1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.373666 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-scripts" (OuterVolumeSpecName: "scripts") pod "961accad-8205-4289-9227-4ab2538ebdb1" (UID: "961accad-8205-4289-9227-4ab2538ebdb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.381064 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/961accad-8205-4289-9227-4ab2538ebdb1-kube-api-access-br5k6" (OuterVolumeSpecName: "kube-api-access-br5k6") pod "961accad-8205-4289-9227-4ab2538ebdb1" (UID: "961accad-8205-4289-9227-4ab2538ebdb1"). InnerVolumeSpecName "kube-api-access-br5k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.457625 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3" (OuterVolumeSpecName: "glance") pod "961accad-8205-4289-9227-4ab2538ebdb1" (UID: "961accad-8205-4289-9227-4ab2538ebdb1"). InnerVolumeSpecName "pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.473058 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") on node \"crc\" " Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.473108 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/961accad-8205-4289-9227-4ab2538ebdb1-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.473122 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.473135 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br5k6\" (UniqueName: \"kubernetes.io/projected/961accad-8205-4289-9227-4ab2538ebdb1-kube-api-access-br5k6\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.498125 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.500962 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "961accad-8205-4289-9227-4ab2538ebdb1" (UID: "961accad-8205-4289-9227-4ab2538ebdb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.519647 4740 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.519956 4740 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3") on node "crc" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.542787 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "961accad-8205-4289-9227-4ab2538ebdb1" (UID: "961accad-8205-4289-9227-4ab2538ebdb1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.575333 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.575390 4740 reconciler_common.go:293] "Volume detached for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.575412 4740 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.579091 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4t7gm" event={"ID":"b99efeb7-303a-444e-8427-8c5613d8bc65","Type":"ContainerDied","Data":"48f016a81f79f7dbf2129b05b589fde9ca2d5aa5b962857773af99fa32fea4c1"} Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.579178 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f016a81f79f7dbf2129b05b589fde9ca2d5aa5b962857773af99fa32fea4c1" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.579231 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4t7gm" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.590803 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-config-data" (OuterVolumeSpecName: "config-data") pod "961accad-8205-4289-9227-4ab2538ebdb1" (UID: "961accad-8205-4289-9227-4ab2538ebdb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.590920 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-69a4-account-create-update-tnzls" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.590810 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-69a4-account-create-update-tnzls" event={"ID":"a7e2940b-8a2d-4865-a312-a5f3b783f0b0","Type":"ContainerDied","Data":"8b09c8af5bb4e8b04b69cfdbc0ee5301e15bb73cb38618c7719157b16f134281"} Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.591056 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b09c8af5bb4e8b04b69cfdbc0ee5301e15bb73cb38618c7719157b16f134281" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.597742 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qjb6j" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.597737 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qjb6j" event={"ID":"0e660bb0-89ad-41e3-8b92-c57cdb00e15a","Type":"ContainerDied","Data":"c7c11181335ea08f970060fca74de8a4dd6c7da096cc3c286d90d2c082d1e1b7"} Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.597869 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7c11181335ea08f970060fca74de8a4dd6c7da096cc3c286d90d2c082d1e1b7" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.607330 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e3b49e9-60b0-4090-a703-acbc21b9b6b0","Type":"ContainerStarted","Data":"284c42befb4cff2444541ac917f51d305d06b91d5a61d13e425ad57551e919bc"} Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.620602 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-aad2-account-create-update-69g8c" event={"ID":"6eda7b5b-a45e-4aaa-a107-1b602beb6ed1","Type":"ContainerDied","Data":"48aa0d792b0fa4eefd5063c501d8f6125b2123bb49f1f935adc0dced2a1088c6"} Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.620660 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48aa0d792b0fa4eefd5063c501d8f6125b2123bb49f1f935adc0dced2a1088c6" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.620629 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-aad2-account-create-update-69g8c" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.625834 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"961accad-8205-4289-9227-4ab2538ebdb1","Type":"ContainerDied","Data":"0496d16eefeb402932dd3365c16a286b0a4060ed68454e8bc53afc91408b80f4"} Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.625901 4740 scope.go:117] "RemoveContainer" containerID="0863b88005332061e30181135cd3f294739f8eb27b01ba7183a625bbd06f214e" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.626141 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.677276 4740 scope.go:117] "RemoveContainer" containerID="03fbd565c19c239e68a13cc8441944efbe25f635e25dd9da6511f4c2d193a2f8" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.679655 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/961accad-8205-4289-9227-4ab2538ebdb1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.704005 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.737043 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.755215 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:20:50 crc kubenswrapper[4740]: E0130 16:20:50.755809 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e660bb0-89ad-41e3-8b92-c57cdb00e15a" containerName="mariadb-database-create" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.755832 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e660bb0-89ad-41e3-8b92-c57cdb00e15a" containerName="mariadb-database-create" Jan 30 16:20:50 crc kubenswrapper[4740]: E0130 16:20:50.755846 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961accad-8205-4289-9227-4ab2538ebdb1" containerName="glance-httpd" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.755851 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="961accad-8205-4289-9227-4ab2538ebdb1" containerName="glance-httpd" Jan 30 16:20:50 crc kubenswrapper[4740]: E0130 16:20:50.755879 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99efeb7-303a-444e-8427-8c5613d8bc65" containerName="mariadb-database-create" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.755885 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99efeb7-303a-444e-8427-8c5613d8bc65" containerName="mariadb-database-create" Jan 30 16:20:50 crc kubenswrapper[4740]: E0130 16:20:50.755907 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbfcfe3a-498e-4604-a5ca-97a951b24573" containerName="mariadb-account-create-update" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.755914 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbfcfe3a-498e-4604-a5ca-97a951b24573" containerName="mariadb-account-create-update" Jan 30 16:20:50 crc kubenswrapper[4740]: E0130 16:20:50.755929 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e2940b-8a2d-4865-a312-a5f3b783f0b0" containerName="mariadb-account-create-update" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.755937 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e2940b-8a2d-4865-a312-a5f3b783f0b0" containerName="mariadb-account-create-update" Jan 30 16:20:50 crc kubenswrapper[4740]: E0130 16:20:50.755955 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35414dff-66ea-4bb3-9a02-46c80f0822a8" containerName="mariadb-database-create" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.755962 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="35414dff-66ea-4bb3-9a02-46c80f0822a8" containerName="mariadb-database-create" Jan 30 16:20:50 crc kubenswrapper[4740]: E0130 16:20:50.755971 4740 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961accad-8205-4289-9227-4ab2538ebdb1" containerName="glance-log" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.755978 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="961accad-8205-4289-9227-4ab2538ebdb1" containerName="glance-log" Jan 30 16:20:50 crc kubenswrapper[4740]: E0130 16:20:50.755997 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eda7b5b-a45e-4aaa-a107-1b602beb6ed1" containerName="mariadb-account-create-update" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.756004 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eda7b5b-a45e-4aaa-a107-1b602beb6ed1" containerName="mariadb-account-create-update" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.756266 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eda7b5b-a45e-4aaa-a107-1b602beb6ed1" containerName="mariadb-account-create-update" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.756299 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="961accad-8205-4289-9227-4ab2538ebdb1" containerName="glance-log" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.756313 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99efeb7-303a-444e-8427-8c5613d8bc65" containerName="mariadb-database-create" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.756324 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="35414dff-66ea-4bb3-9a02-46c80f0822a8" containerName="mariadb-database-create" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.756332 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="961accad-8205-4289-9227-4ab2538ebdb1" containerName="glance-httpd" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.756338 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e660bb0-89ad-41e3-8b92-c57cdb00e15a" containerName="mariadb-database-create" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.756363 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e2940b-8a2d-4865-a312-a5f3b783f0b0" containerName="mariadb-account-create-update" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.756373 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbfcfe3a-498e-4604-a5ca-97a951b24573" containerName="mariadb-account-create-update" Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.757893 4740 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.763656 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.763978 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.784019 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.886277 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.886430 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-logs\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.886479 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.886517 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.886580 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.886615 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.886666 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.886899 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdxt8\" (UniqueName: \"kubernetes.io/projected/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-kube-api-access-tdxt8\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.994041 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdxt8\" (UniqueName: \"kubernetes.io/projected/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-kube-api-access-tdxt8\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.995085 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.995213 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-logs\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.995343 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.995578 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.996114 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-logs\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.996983 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.997682 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.997877 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:50 crc kubenswrapper[4740]: I0130 16:20:50.998660 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:21:51 crc kubenswrapper[4740]: I0130 16:20:51.001846 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.001937 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/60fa5ce19fcc327994c70afc2a90d04a285bdf3051c4029a47293957e337f4a5/globalmount\"" pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.003460 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.004662 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.005882 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.015541 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdxt8\" (UniqueName: \"kubernetes.io/projected/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-kube-api-access-tdxt8\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.034380 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d01b7b2-9f95-43e7-abae-1b1acb9c817b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.093257 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0"
\"pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b30a6b6-46f4-42d8-92dd-2f91ccc73ce3\") pod \"glance-default-internal-api-0\" (UID: \"3d01b7b2-9f95-43e7-abae-1b1acb9c817b\") " pod="openstack/glance-default-internal-api-0" Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.103337 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.356939 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="961accad-8205-4289-9227-4ab2538ebdb1" path="/var/lib/kubelet/pods/961accad-8205-4289-9227-4ab2538ebdb1/volumes" Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.647520 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a5e983-5d40-4774-831b-074b30893056","Type":"ContainerStarted","Data":"f132bd130a3ac438e6a8f0912ccc8f5ef2e026fc5dd61851fb7ed14ccda32784"} Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.648312 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="ceilometer-central-agent" containerID="cri-o://9481db3ab5ce2e92a8d194ad7253b9bafea9ebe41be5ef14f099c3c545899672" gracePeriod=30 Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.648616 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="proxy-httpd" containerID="cri-o://f132bd130a3ac438e6a8f0912ccc8f5ef2e026fc5dd61851fb7ed14ccda32784" gracePeriod=30 Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.648679 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="sg-core" containerID="cri-o://ab009bed225a21a3b3fbc9cab3e1047d2ba012530f73099e5d6b5cae4bd66043" gracePeriod=30 Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.648732 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="ceilometer-notification-agent" containerID="cri-o://af5314b6f0b62135ae2896b483ccac69838a0ace303fc52609ea010b711286ce" gracePeriod=30 Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.648275 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.688919 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.453274068 podStartE2EDuration="9.688892965s" podCreationTimestamp="2026-01-30 16:20:42 +0000 UTC" firstStartedPulling="2026-01-30 16:20:43.861553167 +0000 UTC m=+1492.498615766" lastFinishedPulling="2026-01-30 16:20:50.097172064 +0000 UTC m=+1498.734234663" observedRunningTime="2026-01-30 16:20:51.675183855 +0000 UTC m=+1500.312246454" watchObservedRunningTime="2026-01-30 16:20:51.688892965 +0000 UTC m=+1500.325955554" Jan 30 16:20:51 crc kubenswrapper[4740]: I0130 16:20:51.860463 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 16:20:51 crc kubenswrapper[4740]: E0130 16:20:51.957185 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7a5e983_5d40_4774_831b_074b30893056.slice/crio-conmon-ab009bed225a21a3b3fbc9cab3e1047d2ba012530f73099e5d6b5cae4bd66043.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7a5e983_5d40_4774_831b_074b30893056.slice/crio-ab009bed225a21a3b3fbc9cab3e1047d2ba012530f73099e5d6b5cae4bd66043.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7a5e983_5d40_4774_831b_074b30893056.slice/crio-conmon-f132bd130a3ac438e6a8f0912ccc8f5ef2e026fc5dd61851fb7ed14ccda32784.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.559568 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" probeResult="failure" output=< Jan 30 16:20:52 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:20:52 crc kubenswrapper[4740]: > Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.676626 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e3b49e9-60b0-4090-a703-acbc21b9b6b0","Type":"ContainerStarted","Data":"4aaca6da8186a703092b5c458a9fcd472de6d48cad78a6f90172dc91e270dfee"} Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.676993 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e3b49e9-60b0-4090-a703-acbc21b9b6b0","Type":"ContainerStarted","Data":"6ab24ddd636011ad40763bfe932863f6ee16b8e435cc86f9e3c800d3fad8b091"} Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.680475 4740 generic.go:334] "Generic (PLEG): container finished" podID="e7a5e983-5d40-4774-831b-074b30893056" containerID="f132bd130a3ac438e6a8f0912ccc8f5ef2e026fc5dd61851fb7ed14ccda32784" exitCode=0 Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.680670 4740 generic.go:334] "Generic (PLEG): container finished" podID="e7a5e983-5d40-4774-831b-074b30893056" containerID="ab009bed225a21a3b3fbc9cab3e1047d2ba012530f73099e5d6b5cae4bd66043" exitCode=2 Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.680745 4740 generic.go:334] "Generic (PLEG): container finished" podID="e7a5e983-5d40-4774-831b-074b30893056" containerID="af5314b6f0b62135ae2896b483ccac69838a0ace303fc52609ea010b711286ce" exitCode=0 Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.680569 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a5e983-5d40-4774-831b-074b30893056","Type":"ContainerDied","Data":"f132bd130a3ac438e6a8f0912ccc8f5ef2e026fc5dd61851fb7ed14ccda32784"} Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.681390 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a5e983-5d40-4774-831b-074b30893056","Type":"ContainerDied","Data":"ab009bed225a21a3b3fbc9cab3e1047d2ba012530f73099e5d6b5cae4bd66043"} Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.681478 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a5e983-5d40-4774-831b-074b30893056","Type":"ContainerDied","Data":"af5314b6f0b62135ae2896b483ccac69838a0ace303fc52609ea010b711286ce"} Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.683445 4740 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3d01b7b2-9f95-43e7-abae-1b1acb9c817b","Type":"ContainerStarted","Data":"882ac8a6a4c0b8361dc88ea20f45ee81fc9c679c258cf13f625fc012a2a56dba"} Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.683492 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3d01b7b2-9f95-43e7-abae-1b1acb9c817b","Type":"ContainerStarted","Data":"329be2e622f9428f3a554beced998877194b3b84c5675595697c17035631bbed"} Jan 30 16:20:52 crc kubenswrapper[4740]: I0130 16:20:52.706412 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.706381825 podStartE2EDuration="4.706381825s" podCreationTimestamp="2026-01-30 16:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:52.701253598 +0000 UTC m=+1501.338316197" watchObservedRunningTime="2026-01-30 16:20:52.706381825 +0000 UTC m=+1501.343444424" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.499124 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kktg4"] Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.501524 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.505506 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ks2ch" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.505747 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.505898 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.521705 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kktg4"] Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.603563 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-scripts\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.603777 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-config-data\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.603850 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.604437 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5sl9p\" (UniqueName: \"kubernetes.io/projected/7b5a28f7-a3dd-4812-af52-97f58641116a-kube-api-access-5sl9p\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.705770 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sl9p\" (UniqueName: \"kubernetes.io/projected/7b5a28f7-a3dd-4812-af52-97f58641116a-kube-api-access-5sl9p\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.705858 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-scripts\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.705902 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-config-data\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.705935 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.713980 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.715187 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3d01b7b2-9f95-43e7-abae-1b1acb9c817b","Type":"ContainerStarted","Data":"01decb971d74ed5b87f0250fd1cd58d95d49036fd97d5356f15ded99ce4ff18f"} Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.717680 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-config-data\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.718534 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-scripts\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.734053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sl9p\" (UniqueName: 
\"kubernetes.io/projected/7b5a28f7-a3dd-4812-af52-97f58641116a-kube-api-access-5sl9p\") pod \"nova-cell0-conductor-db-sync-kktg4\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.750294 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.750267429 podStartE2EDuration="4.750267429s" podCreationTimestamp="2026-01-30 16:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:20:54.736763554 +0000 UTC m=+1503.373826153" watchObservedRunningTime="2026-01-30 16:20:54.750267429 +0000 UTC m=+1503.387330028" Jan 30 16:20:54 crc kubenswrapper[4740]: I0130 16:20:54.840028 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:20:55 crc kubenswrapper[4740]: I0130 16:20:55.454735 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kktg4"] Jan 30 16:20:55 crc kubenswrapper[4740]: I0130 16:20:55.731123 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kktg4" event={"ID":"7b5a28f7-a3dd-4812-af52-97f58641116a","Type":"ContainerStarted","Data":"a218bec101565a9eb83d2754beb5818982fa6b2a6c4fc399948759fb6a2d16a1"} Jan 30 16:20:58 crc kubenswrapper[4740]: I0130 16:20:58.967069 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 16:20:58 crc kubenswrapper[4740]: I0130 16:20:58.967953 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 16:20:59 crc kubenswrapper[4740]: I0130 16:20:59.015333 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 16:20:59 crc kubenswrapper[4740]: I0130 16:20:59.048167 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 16:20:59 crc kubenswrapper[4740]: I0130 16:20:59.793953 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 16:20:59 crc kubenswrapper[4740]: I0130 16:20:59.794037 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 16:21:01 crc kubenswrapper[4740]: I0130 16:21:01.103836 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 16:21:01 crc kubenswrapper[4740]: I0130 16:21:01.104327 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 16:21:01 crc kubenswrapper[4740]: I0130 16:21:01.161368 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 16:21:01 crc kubenswrapper[4740]: I0130 16:21:01.173867 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 16:21:01 crc kubenswrapper[4740]: I0130 16:21:01.845048 4740 generic.go:334] "Generic (PLEG): container finished" podID="e7a5e983-5d40-4774-831b-074b30893056" 
containerID="9481db3ab5ce2e92a8d194ad7253b9bafea9ebe41be5ef14f099c3c545899672" exitCode=0 Jan 30 16:21:01 crc kubenswrapper[4740]: I0130 16:21:01.846229 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a5e983-5d40-4774-831b-074b30893056","Type":"ContainerDied","Data":"9481db3ab5ce2e92a8d194ad7253b9bafea9ebe41be5ef14f099c3c545899672"} Jan 30 16:21:01 crc kubenswrapper[4740]: I0130 16:21:01.846480 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 16:21:01 crc kubenswrapper[4740]: I0130 16:21:01.846561 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 16:21:02 crc kubenswrapper[4740]: I0130 16:21:02.559439 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" probeResult="failure" output=< Jan 30 16:21:02 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:21:02 crc kubenswrapper[4740]: > Jan 30 16:21:10 crc kubenswrapper[4740]: E0130 16:21:10.068100 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Jan 30 16:21:10 crc kubenswrapper[4740]: E0130 16:21:10.069181 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5sl9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{}
Jan 30 16:21:10 crc kubenswrapper[4740]: E0130 16:21:10.070925 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-kktg4" podUID="7b5a28f7-a3dd-4812-af52-97f58641116a"
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.578126 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.599264 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-scripts\") pod \"e7a5e983-5d40-4774-831b-074b30893056\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") "
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.599339 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-combined-ca-bundle\") pod \"e7a5e983-5d40-4774-831b-074b30893056\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") "
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.599435 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-run-httpd\") pod \"e7a5e983-5d40-4774-831b-074b30893056\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") "
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.599494 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp2xq\" (UniqueName: \"kubernetes.io/projected/e7a5e983-5d40-4774-831b-074b30893056-kube-api-access-mp2xq\") pod \"e7a5e983-5d40-4774-831b-074b30893056\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") "
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.599602 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-sg-core-conf-yaml\") pod \"e7a5e983-5d40-4774-831b-074b30893056\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") "
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.599708 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-log-httpd\") pod \"e7a5e983-5d40-4774-831b-074b30893056\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") "
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.599850 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-config-data\") pod \"e7a5e983-5d40-4774-831b-074b30893056\" (UID: \"e7a5e983-5d40-4774-831b-074b30893056\") "
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.599942 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e7a5e983-5d40-4774-831b-074b30893056" (UID: "e7a5e983-5d40-4774-831b-074b30893056"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.600068 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e7a5e983-5d40-4774-831b-074b30893056" (UID: "e7a5e983-5d40-4774-831b-074b30893056"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.600723 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.600749 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a5e983-5d40-4774-831b-074b30893056-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.609292 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a5e983-5d40-4774-831b-074b30893056-kube-api-access-mp2xq" (OuterVolumeSpecName: "kube-api-access-mp2xq") pod "e7a5e983-5d40-4774-831b-074b30893056" (UID: "e7a5e983-5d40-4774-831b-074b30893056"). InnerVolumeSpecName "kube-api-access-mp2xq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.613665 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-scripts" (OuterVolumeSpecName: "scripts") pod "e7a5e983-5d40-4774-831b-074b30893056" (UID: "e7a5e983-5d40-4774-831b-074b30893056"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.694071 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e7a5e983-5d40-4774-831b-074b30893056" (UID: "e7a5e983-5d40-4774-831b-074b30893056"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.705025 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp2xq\" (UniqueName: \"kubernetes.io/projected/e7a5e983-5d40-4774-831b-074b30893056-kube-api-access-mp2xq\") on node \"crc\" DevicePath \"\""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.705074 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.705093 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.733571 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7a5e983-5d40-4774-831b-074b30893056" (UID: "e7a5e983-5d40-4774-831b-074b30893056"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.799451 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-config-data" (OuterVolumeSpecName: "config-data") pod "e7a5e983-5d40-4774-831b-074b30893056" (UID: "e7a5e983-5d40-4774-831b-074b30893056"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.807375 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.807421 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a5e983-5d40-4774-831b-074b30893056-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.982235 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.982163 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a5e983-5d40-4774-831b-074b30893056","Type":"ContainerDied","Data":"89d5a4d2655ed806394fa6d586e13b15801d98bd7558d8561e0beb701521d8e9"}
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.982475 4740 scope.go:117] "RemoveContainer" containerID="f132bd130a3ac438e6a8f0912ccc8f5ef2e026fc5dd61851fb7ed14ccda32784"
Jan 30 16:21:10 crc kubenswrapper[4740]: E0130 16:21:10.984462 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-kktg4" podUID="7b5a28f7-a3dd-4812-af52-97f58641116a"
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.989249 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.989408 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.994744 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 16:21:10 crc kubenswrapper[4740]: I0130 16:21:10.994917 4740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.000002 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.010610 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.022677 4740 scope.go:117] "RemoveContainer" containerID="ab009bed225a21a3b3fbc9cab3e1047d2ba012530f73099e5d6b5cae4bd66043"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.038812 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.048432 4740 scope.go:117] "RemoveContainer" containerID="af5314b6f0b62135ae2896b483ccac69838a0ace303fc52609ea010b711286ce"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.050385 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.108614 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:21:11 crc kubenswrapper[4740]: E0130 16:21:11.109468 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="proxy-httpd"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.109508 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="proxy-httpd"
Jan 30 16:21:11 crc kubenswrapper[4740]: E0130 16:21:11.109532 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="sg-core"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.109553 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="sg-core"
Jan 30 16:21:11 crc kubenswrapper[4740]: E0130 16:21:11.109577 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="ceilometer-notification-agent"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.109587 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="ceilometer-notification-agent"
Jan 30 16:21:11 crc kubenswrapper[4740]: E0130 16:21:11.109616 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="ceilometer-central-agent"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.109625 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="ceilometer-central-agent"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.109915 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="proxy-httpd"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.109942 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="ceilometer-central-agent"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.109958 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="sg-core"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.109967 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a5e983-5d40-4774-831b-074b30893056" containerName="ceilometer-notification-agent"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.112871 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.115366 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.150876 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.168052 4740 scope.go:117] "RemoveContainer" containerID="9481db3ab5ce2e92a8d194ad7253b9bafea9ebe41be5ef14f099c3c545899672"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.288341 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.322998 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-config-data\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.323111 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-run-httpd\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.323165 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-log-httpd\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.323613 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh99r\" (UniqueName: \"kubernetes.io/projected/094e10ac-a4d2-43c6-baa4-f90a1e062382-kube-api-access-hh99r\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.323739 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.324029 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-scripts\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.324334 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.361271 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a5e983-5d40-4774-831b-074b30893056" path="/var/lib/kubelet/pods/e7a5e983-5d40-4774-831b-074b30893056/volumes"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.436729 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-scripts\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.436971 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.437090 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-config-data\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.437219 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-run-httpd\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.437327 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-log-httpd\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.437903 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-log-httpd\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.438518 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-run-httpd\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.438633 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh99r\" (UniqueName: \"kubernetes.io/projected/094e10ac-a4d2-43c6-baa4-f90a1e062382-kube-api-access-hh99r\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.438775 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.444977 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-scripts\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.447801 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-config-data\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.450043 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.456368 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.482244 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh99r\" (UniqueName: \"kubernetes.io/projected/094e10ac-a4d2-43c6-baa4-f90a1e062382-kube-api-access-hh99r\") pod \"ceilometer-0\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") " pod="openstack/ceilometer-0"
Jan 30 16:21:11 crc kubenswrapper[4740]: I0130 16:21:11.500832 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:21:12 crc kubenswrapper[4740]: I0130 16:21:12.074775 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:21:12 crc kubenswrapper[4740]: I0130 16:21:12.558140 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" probeResult="failure" output=<
Jan 30 16:21:12 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s
Jan 30 16:21:12 crc kubenswrapper[4740]: >
Jan 30 16:21:13 crc kubenswrapper[4740]: I0130 16:21:13.006719 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"094e10ac-a4d2-43c6-baa4-f90a1e062382","Type":"ContainerStarted","Data":"819ccf63e8751b0e3410cb175bd5542dd04af677e37089d4ca8922160109ec89"}
Jan 30 16:21:15 crc kubenswrapper[4740]: I0130 16:21:15.998709 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cloudkitty-api-0" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.197:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 16:21:15 crc kubenswrapper[4740]: I0130 16:21:15.998825 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.197:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 16:21:16 crc kubenswrapper[4740]: I0130 16:21:16.043803 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"094e10ac-a4d2-43c6-baa4-f90a1e062382","Type":"ContainerStarted","Data":"dfa5ff0acd8dba53811f2f310f74d6293a61c5c597ee2fbd116251f113c5ca46"}
Jan 30 16:21:17 crc kubenswrapper[4740]: I0130 16:21:17.059324 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"094e10ac-a4d2-43c6-baa4-f90a1e062382","Type":"ContainerStarted","Data":"52dc443797e52a9589293de50eff40f4b2f0a00f1625015eed073243e2fe621e"}
Jan 30 16:21:19 crc kubenswrapper[4740]: I0130 16:21:19.087467 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"094e10ac-a4d2-43c6-baa4-f90a1e062382","Type":"ContainerStarted","Data":"d7f52a8b68ff305363d6f964cd391476e846c4f5f0660f5d619ceed73b535f6c"}
Jan 30 16:21:19 crc kubenswrapper[4740]: I0130 16:21:19.915297 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0"
Jan 30 16:21:22 crc kubenswrapper[4740]: I0130 16:21:22.907796 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"094e10ac-a4d2-43c6-baa4-f90a1e062382","Type":"ContainerStarted","Data":"648f6a396c2e074c5a40b4f5af2f52b0336460db28f331ac32e3eda3e455c0ee"}
Jan 30 16:21:22 crc kubenswrapper[4740]: I0130 16:21:22.909399 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 16:21:22 crc kubenswrapper[4740]: I0130 16:21:22.953620 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" probeResult="failure" output=<
Jan 30 16:21:22 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s
Jan 30 16:21:22 crc kubenswrapper[4740]: >
Jan 30 16:21:22 crc kubenswrapper[4740]: I0130 16:21:22.971430 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.190556298 podStartE2EDuration="11.97140215s" podCreationTimestamp="2026-01-30 16:21:11 +0000 UTC" firstStartedPulling="2026-01-30 16:21:12.111764279 +0000 UTC m=+1520.748826878" lastFinishedPulling="2026-01-30 16:21:20.892610141 +0000 UTC m=+1529.529672730" observedRunningTime="2026-01-30 16:21:22.937470419 +0000 UTC m=+1531.574533028" watchObservedRunningTime="2026-01-30 16:21:22.97140215 +0000 UTC m=+1531.608464749"
Jan 30 16:21:25 crc kubenswrapper[4740]: I0130 16:21:25.486309 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:21:25 crc kubenswrapper[4740]: I0130 16:21:25.487094 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="ceilometer-central-agent" containerID="cri-o://dfa5ff0acd8dba53811f2f310f74d6293a61c5c597ee2fbd116251f113c5ca46" gracePeriod=30
Jan 30 16:21:25 crc kubenswrapper[4740]: I0130 16:21:25.487120 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="proxy-httpd" containerID="cri-o://648f6a396c2e074c5a40b4f5af2f52b0336460db28f331ac32e3eda3e455c0ee" gracePeriod=30
Jan 30 16:21:25 crc kubenswrapper[4740]: I0130 16:21:25.487177 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="sg-core" containerID="cri-o://d7f52a8b68ff305363d6f964cd391476e846c4f5f0660f5d619ceed73b535f6c" gracePeriod=30
Jan 30 16:21:25 crc kubenswrapper[4740]: I0130 16:21:25.487177 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="ceilometer-notification-agent" containerID="cri-o://52dc443797e52a9589293de50eff40f4b2f0a00f1625015eed073243e2fe621e" gracePeriod=30
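The four "Killing container with a grace period" entries above, each with gracePeriod=30, follow the standard two-phase stop: the runtime sends SIGTERM, gives the process up to the grace period to exit on its own, and only then sends SIGKILL. The exit codes in the PLEG events below (0 for the agents, 2 for sg-core) show all four ceilometer containers going down within the grace period. A generic Go sketch of the pattern, not CRI-O's actual code path:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // killWithGrace: SIGTERM first, SIGKILL only if the process is still
    // alive when the grace period runs out.
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
        _ = cmd.Process.Signal(syscall.SIGTERM)
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case <-done:
            fmt.Println("exited within grace period") // exit codes 0/2 as in the PLEG events
        case <-time.After(grace):
            _ = cmd.Process.Kill() // hard SIGKILL once the grace period expires
            <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "300")
        _ = cmd.Start()
        killWithGrace(cmd, 30*time.Second) // gracePeriod=30 as in the log
    }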
period" pod="openstack/ceilometer-0" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="ceilometer-notification-agent" containerID="cri-o://52dc443797e52a9589293de50eff40f4b2f0a00f1625015eed073243e2fe621e" gracePeriod=30 Jan 30 16:21:25 crc kubenswrapper[4740]: I0130 16:21:25.960554 4740 generic.go:334] "Generic (PLEG): container finished" podID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerID="d7f52a8b68ff305363d6f964cd391476e846c4f5f0660f5d619ceed73b535f6c" exitCode=2 Jan 30 16:21:25 crc kubenswrapper[4740]: I0130 16:21:25.960630 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"094e10ac-a4d2-43c6-baa4-f90a1e062382","Type":"ContainerDied","Data":"d7f52a8b68ff305363d6f964cd391476e846c4f5f0660f5d619ceed73b535f6c"} Jan 30 16:21:26 crc kubenswrapper[4740]: I0130 16:21:26.978788 4740 generic.go:334] "Generic (PLEG): container finished" podID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerID="648f6a396c2e074c5a40b4f5af2f52b0336460db28f331ac32e3eda3e455c0ee" exitCode=0 Jan 30 16:21:26 crc kubenswrapper[4740]: I0130 16:21:26.979272 4740 generic.go:334] "Generic (PLEG): container finished" podID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerID="52dc443797e52a9589293de50eff40f4b2f0a00f1625015eed073243e2fe621e" exitCode=0 Jan 30 16:21:26 crc kubenswrapper[4740]: I0130 16:21:26.978880 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"094e10ac-a4d2-43c6-baa4-f90a1e062382","Type":"ContainerDied","Data":"648f6a396c2e074c5a40b4f5af2f52b0336460db28f331ac32e3eda3e455c0ee"} Jan 30 16:21:26 crc kubenswrapper[4740]: I0130 16:21:26.979328 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"094e10ac-a4d2-43c6-baa4-f90a1e062382","Type":"ContainerDied","Data":"52dc443797e52a9589293de50eff40f4b2f0a00f1625015eed073243e2fe621e"} Jan 30 16:21:28 crc kubenswrapper[4740]: I0130 16:21:28.004013 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kktg4" event={"ID":"7b5a28f7-a3dd-4812-af52-97f58641116a","Type":"ContainerStarted","Data":"79c79f0e0908c57dfcdcea35437ea56783f95ff00ddd358c149b22ded4d10267"} Jan 30 16:21:28 crc kubenswrapper[4740]: I0130 16:21:28.031207 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-kktg4" podStartSLOduration=2.8469799719999997 podStartE2EDuration="34.031180889s" podCreationTimestamp="2026-01-30 16:20:54 +0000 UTC" firstStartedPulling="2026-01-30 16:20:55.464517591 +0000 UTC m=+1504.101580190" lastFinishedPulling="2026-01-30 16:21:26.648718518 +0000 UTC m=+1535.285781107" observedRunningTime="2026-01-30 16:21:28.025635851 +0000 UTC m=+1536.662698470" watchObservedRunningTime="2026-01-30 16:21:28.031180889 +0000 UTC m=+1536.668243498" Jan 30 16:21:32 crc kubenswrapper[4740]: I0130 16:21:32.554528 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" probeResult="failure" output=< Jan 30 16:21:32 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:21:32 crc kubenswrapper[4740]: > Jan 30 16:21:33 crc kubenswrapper[4740]: I0130 16:21:33.074666 4740 generic.go:334] "Generic (PLEG): container finished" podID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerID="dfa5ff0acd8dba53811f2f310f74d6293a61c5c597ee2fbd116251f113c5ca46" exitCode=0 Jan 30 16:21:33 crc 
Jan 30 16:21:33 crc kubenswrapper[4740]: I0130 16:21:33.901574 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.082748 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-config-data\") pod \"094e10ac-a4d2-43c6-baa4-f90a1e062382\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") "
Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.083336 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-sg-core-conf-yaml\") pod \"094e10ac-a4d2-43c6-baa4-f90a1e062382\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") "
Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.083938 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-log-httpd\") pod \"094e10ac-a4d2-43c6-baa4-f90a1e062382\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") "
Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.084014 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-combined-ca-bundle\") pod \"094e10ac-a4d2-43c6-baa4-f90a1e062382\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") "
Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.084076 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-run-httpd\") pod \"094e10ac-a4d2-43c6-baa4-f90a1e062382\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") "
Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.084137 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh99r\" (UniqueName: \"kubernetes.io/projected/094e10ac-a4d2-43c6-baa4-f90a1e062382-kube-api-access-hh99r\") pod \"094e10ac-a4d2-43c6-baa4-f90a1e062382\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") "
Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.084173 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-scripts\") pod \"094e10ac-a4d2-43c6-baa4-f90a1e062382\" (UID: \"094e10ac-a4d2-43c6-baa4-f90a1e062382\") "
Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.084532 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "094e10ac-a4d2-43c6-baa4-f90a1e062382" (UID: "094e10ac-a4d2-43c6-baa4-f90a1e062382"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.084584 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "094e10ac-a4d2-43c6-baa4-f90a1e062382" (UID: "094e10ac-a4d2-43c6-baa4-f90a1e062382"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.085196 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.085225 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/094e10ac-a4d2-43c6-baa4-f90a1e062382-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.092094 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-scripts" (OuterVolumeSpecName: "scripts") pod "094e10ac-a4d2-43c6-baa4-f90a1e062382" (UID: "094e10ac-a4d2-43c6-baa4-f90a1e062382"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.093054 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/094e10ac-a4d2-43c6-baa4-f90a1e062382-kube-api-access-hh99r" (OuterVolumeSpecName: "kube-api-access-hh99r") pod "094e10ac-a4d2-43c6-baa4-f90a1e062382" (UID: "094e10ac-a4d2-43c6-baa4-f90a1e062382"). InnerVolumeSpecName "kube-api-access-hh99r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.093478 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"094e10ac-a4d2-43c6-baa4-f90a1e062382","Type":"ContainerDied","Data":"819ccf63e8751b0e3410cb175bd5542dd04af677e37089d4ca8922160109ec89"} Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.093553 4740 scope.go:117] "RemoveContainer" containerID="648f6a396c2e074c5a40b4f5af2f52b0336460db28f331ac32e3eda3e455c0ee" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.093562 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.124894 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "094e10ac-a4d2-43c6-baa4-f90a1e062382" (UID: "094e10ac-a4d2-43c6-baa4-f90a1e062382"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.188410 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.188827 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh99r\" (UniqueName: \"kubernetes.io/projected/094e10ac-a4d2-43c6-baa4-f90a1e062382-kube-api-access-hh99r\") on node \"crc\" DevicePath \"\"" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.188894 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.199258 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "094e10ac-a4d2-43c6-baa4-f90a1e062382" (UID: "094e10ac-a4d2-43c6-baa4-f90a1e062382"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.220557 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-config-data" (OuterVolumeSpecName: "config-data") pod "094e10ac-a4d2-43c6-baa4-f90a1e062382" (UID: "094e10ac-a4d2-43c6-baa4-f90a1e062382"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.295959 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.296016 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094e10ac-a4d2-43c6-baa4-f90a1e062382-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.341147 4740 scope.go:117] "RemoveContainer" containerID="d7f52a8b68ff305363d6f964cd391476e846c4f5f0660f5d619ceed73b535f6c" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.362707 4740 scope.go:117] "RemoveContainer" containerID="52dc443797e52a9589293de50eff40f4b2f0a00f1625015eed073243e2fe621e" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.384166 4740 scope.go:117] "RemoveContainer" containerID="dfa5ff0acd8dba53811f2f310f74d6293a61c5c597ee2fbd116251f113c5ca46" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.459066 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.473529 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.486451 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:21:34 crc kubenswrapper[4740]: E0130 16:21:34.487044 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="proxy-httpd" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.487060 4740 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="proxy-httpd" Jan 30 16:21:34 crc kubenswrapper[4740]: E0130 16:21:34.487103 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="ceilometer-notification-agent" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.487109 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="ceilometer-notification-agent" Jan 30 16:21:34 crc kubenswrapper[4740]: E0130 16:21:34.487123 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="sg-core" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.487129 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="sg-core" Jan 30 16:21:34 crc kubenswrapper[4740]: E0130 16:21:34.487140 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="ceilometer-central-agent" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.487146 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="ceilometer-central-agent" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.487328 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="proxy-httpd" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.487457 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="sg-core" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.487467 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="ceilometer-notification-agent" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.487477 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" containerName="ceilometer-central-agent" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.490103 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.493602 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.493977 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.505725 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqtxj\" (UniqueName: \"kubernetes.io/projected/4e1e6015-b5d3-49d4-b9f9-9813c700018d-kube-api-access-lqtxj\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.505812 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.505975 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.506502 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-scripts\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.506927 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-run-httpd\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.506656 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.506972 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-log-httpd\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.507130 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-config-data\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.608819 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqtxj\" (UniqueName: \"kubernetes.io/projected/4e1e6015-b5d3-49d4-b9f9-9813c700018d-kube-api-access-lqtxj\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: 
I0130 16:21:34.608884 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.608979 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.609013 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-scripts\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.609045 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-run-httpd\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.609062 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-log-httpd\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.609106 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-config-data\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.609861 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-run-httpd\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.609994 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-log-httpd\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.619311 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.619379 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-scripts\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.619674 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.619718 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-config-data\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.629217 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqtxj\" (UniqueName: \"kubernetes.io/projected/4e1e6015-b5d3-49d4-b9f9-9813c700018d-kube-api-access-lqtxj\") pod \"ceilometer-0\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " pod="openstack/ceilometer-0" Jan 30 16:21:34 crc kubenswrapper[4740]: I0130 16:21:34.853610 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:21:35 crc kubenswrapper[4740]: I0130 16:21:35.350838 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="094e10ac-a4d2-43c6-baa4-f90a1e062382" path="/var/lib/kubelet/pods/094e10ac-a4d2-43c6-baa4-f90a1e062382/volumes" Jan 30 16:21:35 crc kubenswrapper[4740]: I0130 16:21:35.424808 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:21:35 crc kubenswrapper[4740]: W0130 16:21:35.430432 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e1e6015_b5d3_49d4_b9f9_9813c700018d.slice/crio-dd47be54582f4582cf360e4fe2203f139db10d45a4e6d31244f55f1aae67daf1 WatchSource:0}: Error finding container dd47be54582f4582cf360e4fe2203f139db10d45a4e6d31244f55f1aae67daf1: Status 404 returned error can't find the container with id dd47be54582f4582cf360e4fe2203f139db10d45a4e6d31244f55f1aae67daf1 Jan 30 16:21:36 crc kubenswrapper[4740]: I0130 16:21:36.134290 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1e6015-b5d3-49d4-b9f9-9813c700018d","Type":"ContainerStarted","Data":"dd47be54582f4582cf360e4fe2203f139db10d45a4e6d31244f55f1aae67daf1"} Jan 30 16:21:38 crc kubenswrapper[4740]: I0130 16:21:38.161697 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1e6015-b5d3-49d4-b9f9-9813c700018d","Type":"ContainerStarted","Data":"96693fe2d20840d2da008c19461d3ca0f9b1bed65006eadcad2e4f79bba26db8"} Jan 30 16:21:40 crc kubenswrapper[4740]: I0130 16:21:40.183768 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1e6015-b5d3-49d4-b9f9-9813c700018d","Type":"ContainerStarted","Data":"dc30609f96d4ca8b2e32c18e59b968a0f66ee27781655feb5b94f15cf72f4a6f"} Jan 30 16:21:42 crc kubenswrapper[4740]: I0130 16:21:42.210759 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1e6015-b5d3-49d4-b9f9-9813c700018d","Type":"ContainerStarted","Data":"f98f741859ef2d302a54bf9b015161f7bf0b0abc159c63a255af52a0fec34f89"} Jan 30 16:21:42 crc kubenswrapper[4740]: I0130 16:21:42.560095 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" probeResult="failure" output=< Jan 30 16:21:42 crc kubenswrapper[4740]: 
Jan 30 16:21:42 crc kubenswrapper[4740]: I0130 16:21:42.560199 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jz9qh"
Jan 30 16:21:42 crc kubenswrapper[4740]: I0130 16:21:42.561235 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756"} pod="openshift-marketplace/redhat-operators-jz9qh" containerMessage="Container registry-server failed startup probe, will be restarted"
Jan 30 16:21:42 crc kubenswrapper[4740]: I0130 16:21:42.561290 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" containerID="cri-o://1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756" gracePeriod=30
Jan 30 16:21:45 crc kubenswrapper[4740]: I0130 16:21:45.249742 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1e6015-b5d3-49d4-b9f9-9813c700018d","Type":"ContainerStarted","Data":"1ce2f3491e6c1626e859dbeacd16ac9abf0e661ceb9a8aed430ece41fc4f34f7"}
Jan 30 16:21:45 crc kubenswrapper[4740]: I0130 16:21:45.250283 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 16:21:45 crc kubenswrapper[4740]: I0130 16:21:45.280098 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.21675527 podStartE2EDuration="11.280073058s" podCreationTimestamp="2026-01-30 16:21:34 +0000 UTC" firstStartedPulling="2026-01-30 16:21:35.437083396 +0000 UTC m=+1544.074145995" lastFinishedPulling="2026-01-30 16:21:44.500401184 +0000 UTC m=+1553.137463783" observedRunningTime="2026-01-30 16:21:45.275683419 +0000 UTC m=+1553.912746018" watchObservedRunningTime="2026-01-30 16:21:45.280073058 +0000 UTC m=+1553.917135657"
Jan 30 16:21:54 crc kubenswrapper[4740]: I0130 16:21:54.357882 4740 generic.go:334] "Generic (PLEG): container finished" podID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerID="1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756" exitCode=0
Jan 30 16:21:54 crc kubenswrapper[4740]: I0130 16:21:54.358006 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz9qh" event={"ID":"b939d225-58bf-4604-953d-8ed193ae6f0b","Type":"ContainerDied","Data":"1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756"}
Jan 30 16:21:54 crc kubenswrapper[4740]: I0130 16:21:54.454937 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:21:54 crc kubenswrapper[4740]: I0130 16:21:54.455002 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:21:56 crc kubenswrapper[4740]: I0130 16:21:56.388314 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz9qh" event={"ID":"b939d225-58bf-4604-953d-8ed193ae6f0b","Type":"ContainerStarted","Data":"1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e"}
event for pod" pod="openshift-marketplace/redhat-operators-jz9qh" event={"ID":"b939d225-58bf-4604-953d-8ed193ae6f0b","Type":"ContainerStarted","Data":"1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e"} Jan 30 16:22:01 crc kubenswrapper[4740]: I0130 16:22:01.498955 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:22:01 crc kubenswrapper[4740]: I0130 16:22:01.500008 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:22:02 crc kubenswrapper[4740]: I0130 16:22:02.554914 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" probeResult="failure" output=< Jan 30 16:22:02 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:22:02 crc kubenswrapper[4740]: > Jan 30 16:22:04 crc kubenswrapper[4740]: I0130 16:22:04.878874 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.009068 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.014911 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a8ee026b-f6be-4d78-adf8-eaa7c77e1e00" containerName="kube-state-metrics" containerID="cri-o://c335912e6c8a515bf6d7a0922c3bee7b7df4d7b7849a3545ac13ad96a0fcb55c" gracePeriod=30 Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.571066 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.580775 4740 generic.go:334] "Generic (PLEG): container finished" podID="a8ee026b-f6be-4d78-adf8-eaa7c77e1e00" containerID="c335912e6c8a515bf6d7a0922c3bee7b7df4d7b7849a3545ac13ad96a0fcb55c" exitCode=2 Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.580839 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8ee026b-f6be-4d78-adf8-eaa7c77e1e00","Type":"ContainerDied","Data":"c335912e6c8a515bf6d7a0922c3bee7b7df4d7b7849a3545ac13ad96a0fcb55c"} Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.580874 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8ee026b-f6be-4d78-adf8-eaa7c77e1e00","Type":"ContainerDied","Data":"ca7c29baabffa9574bad258ff0e49852e08a6f671a48a9c274a211edd8f29ee8"} Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.580885 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca7c29baabffa9574bad258ff0e49852e08a6f671a48a9c274a211edd8f29ee8" Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.654657 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.670252 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.802909 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8cc6\" (UniqueName: \"kubernetes.io/projected/a8ee026b-f6be-4d78-adf8-eaa7c77e1e00-kube-api-access-n8cc6\") pod \"a8ee026b-f6be-4d78-adf8-eaa7c77e1e00\" (UID: \"a8ee026b-f6be-4d78-adf8-eaa7c77e1e00\") " Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.840923 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ee026b-f6be-4d78-adf8-eaa7c77e1e00-kube-api-access-n8cc6" (OuterVolumeSpecName: "kube-api-access-n8cc6") pod "a8ee026b-f6be-4d78-adf8-eaa7c77e1e00" (UID: "a8ee026b-f6be-4d78-adf8-eaa7c77e1e00"). InnerVolumeSpecName "kube-api-access-n8cc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:11 crc kubenswrapper[4740]: I0130 16:22:11.906874 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8cc6\" (UniqueName: \"kubernetes.io/projected/a8ee026b-f6be-4d78-adf8-eaa7c77e1e00-kube-api-access-n8cc6\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.378051 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jz9qh"] Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.592131 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.633045 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.647237 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.668923 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:22:12 crc kubenswrapper[4740]: E0130 16:22:12.669800 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ee026b-f6be-4d78-adf8-eaa7c77e1e00" containerName="kube-state-metrics" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.669826 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ee026b-f6be-4d78-adf8-eaa7c77e1e00" containerName="kube-state-metrics" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.670107 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ee026b-f6be-4d78-adf8-eaa7c77e1e00" containerName="kube-state-metrics" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.671397 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.675458 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.675604 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.683269 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.836388 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c39bfe6-b89f-4699-95ff-e79c94b13740-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.836684 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1c39bfe6-b89f-4699-95ff-e79c94b13740-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.837003 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c39bfe6-b89f-4699-95ff-e79c94b13740-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.837085 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc8xc\" (UniqueName: \"kubernetes.io/projected/1c39bfe6-b89f-4699-95ff-e79c94b13740-kube-api-access-fc8xc\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.940317 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c39bfe6-b89f-4699-95ff-e79c94b13740-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.940503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1c39bfe6-b89f-4699-95ff-e79c94b13740-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.940606 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c39bfe6-b89f-4699-95ff-e79c94b13740-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.940653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc8xc\" 
(UniqueName: \"kubernetes.io/projected/1c39bfe6-b89f-4699-95ff-e79c94b13740-kube-api-access-fc8xc\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.946642 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1c39bfe6-b89f-4699-95ff-e79c94b13740-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.946885 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c39bfe6-b89f-4699-95ff-e79c94b13740-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.951243 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c39bfe6-b89f-4699-95ff-e79c94b13740-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:12 crc kubenswrapper[4740]: I0130 16:22:12.966250 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc8xc\" (UniqueName: \"kubernetes.io/projected/1c39bfe6-b89f-4699-95ff-e79c94b13740-kube-api-access-fc8xc\") pod \"kube-state-metrics-0\" (UID: \"1c39bfe6-b89f-4699-95ff-e79c94b13740\") " pod="openstack/kube-state-metrics-0" Jan 30 16:22:13 crc kubenswrapper[4740]: I0130 16:22:13.009649 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 16:22:13 crc kubenswrapper[4740]: I0130 16:22:13.401492 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ee026b-f6be-4d78-adf8-eaa7c77e1e00" path="/var/lib/kubelet/pods/a8ee026b-f6be-4d78-adf8-eaa7c77e1e00/volumes" Jan 30 16:22:13 crc kubenswrapper[4740]: I0130 16:22:13.605408 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jz9qh" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" containerID="cri-o://1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e" gracePeriod=2 Jan 30 16:22:13 crc kubenswrapper[4740]: I0130 16:22:13.640864 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.488869 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.589385 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-catalog-content\") pod \"b939d225-58bf-4604-953d-8ed193ae6f0b\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.589563 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-utilities\") pod \"b939d225-58bf-4604-953d-8ed193ae6f0b\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.589776 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2gg4\" (UniqueName: \"kubernetes.io/projected/b939d225-58bf-4604-953d-8ed193ae6f0b-kube-api-access-r2gg4\") pod \"b939d225-58bf-4604-953d-8ed193ae6f0b\" (UID: \"b939d225-58bf-4604-953d-8ed193ae6f0b\") " Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.594425 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b939d225-58bf-4604-953d-8ed193ae6f0b-kube-api-access-r2gg4" (OuterVolumeSpecName: "kube-api-access-r2gg4") pod "b939d225-58bf-4604-953d-8ed193ae6f0b" (UID: "b939d225-58bf-4604-953d-8ed193ae6f0b"). InnerVolumeSpecName "kube-api-access-r2gg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.598815 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-utilities" (OuterVolumeSpecName: "utilities") pod "b939d225-58bf-4604-953d-8ed193ae6f0b" (UID: "b939d225-58bf-4604-953d-8ed193ae6f0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.619573 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1c39bfe6-b89f-4699-95ff-e79c94b13740","Type":"ContainerStarted","Data":"dbd5e64619e857c6d8eb3df38f9f3f5e7344726ea7f98dbc715dcb09fc268e19"} Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.623221 4740 generic.go:334] "Generic (PLEG): container finished" podID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerID="1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e" exitCode=0 Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.623273 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz9qh" event={"ID":"b939d225-58bf-4604-953d-8ed193ae6f0b","Type":"ContainerDied","Data":"1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e"} Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.623309 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jz9qh" event={"ID":"b939d225-58bf-4604-953d-8ed193ae6f0b","Type":"ContainerDied","Data":"a516247ae58cdd279248126ea9d60ea880a54e7452c327c0ffb4a84bd4f2ca86"} Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.623329 4740 scope.go:117] "RemoveContainer" containerID="1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.623505 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jz9qh" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.666154 4740 scope.go:117] "RemoveContainer" containerID="1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.696243 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.696296 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2gg4\" (UniqueName: \"kubernetes.io/projected/b939d225-58bf-4604-953d-8ed193ae6f0b-kube-api-access-r2gg4\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.725033 4740 scope.go:117] "RemoveContainer" containerID="d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.760038 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b939d225-58bf-4604-953d-8ed193ae6f0b" (UID: "b939d225-58bf-4604-953d-8ed193ae6f0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.768403 4740 scope.go:117] "RemoveContainer" containerID="fa6be14a156a4f7df34ccb9e5360bf211af7cf6c5fb71228741f519e56f8b6af" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.799662 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b939d225-58bf-4604-953d-8ed193ae6f0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.822619 4740 scope.go:117] "RemoveContainer" containerID="1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e" Jan 30 16:22:14 crc kubenswrapper[4740]: E0130 16:22:14.823210 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e\": container with ID starting with 1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e not found: ID does not exist" containerID="1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.823246 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e"} err="failed to get container status \"1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e\": rpc error: code = NotFound desc = could not find container \"1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e\": container with ID starting with 1e25ed104f0e568a1bd5504c988a55c986717bae43dcb7d8f57ea6248e657a7e not found: ID does not exist" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.823274 4740 scope.go:117] "RemoveContainer" containerID="1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756" Jan 30 16:22:14 crc kubenswrapper[4740]: E0130 16:22:14.823715 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756\": container 
with ID starting with 1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756 not found: ID does not exist" containerID="1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.823776 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756"} err="failed to get container status \"1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756\": rpc error: code = NotFound desc = could not find container \"1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756\": container with ID starting with 1b66e700efbf101327209f4f27896222fa0f6b183805285cc0fea6091350b756 not found: ID does not exist" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.823813 4740 scope.go:117] "RemoveContainer" containerID="d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173" Jan 30 16:22:14 crc kubenswrapper[4740]: E0130 16:22:14.824164 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173\": container with ID starting with d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173 not found: ID does not exist" containerID="d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.824192 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173"} err="failed to get container status \"d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173\": rpc error: code = NotFound desc = could not find container \"d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173\": container with ID starting with d0a19738c00fb11f8e66c960ce31495a3d19988a6e849ecb42bcc1aa31f76173 not found: ID does not exist" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.824210 4740 scope.go:117] "RemoveContainer" containerID="fa6be14a156a4f7df34ccb9e5360bf211af7cf6c5fb71228741f519e56f8b6af" Jan 30 16:22:14 crc kubenswrapper[4740]: E0130 16:22:14.824728 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa6be14a156a4f7df34ccb9e5360bf211af7cf6c5fb71228741f519e56f8b6af\": container with ID starting with fa6be14a156a4f7df34ccb9e5360bf211af7cf6c5fb71228741f519e56f8b6af not found: ID does not exist" containerID="fa6be14a156a4f7df34ccb9e5360bf211af7cf6c5fb71228741f519e56f8b6af" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.824756 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa6be14a156a4f7df34ccb9e5360bf211af7cf6c5fb71228741f519e56f8b6af"} err="failed to get container status \"fa6be14a156a4f7df34ccb9e5360bf211af7cf6c5fb71228741f519e56f8b6af\": rpc error: code = NotFound desc = could not find container \"fa6be14a156a4f7df34ccb9e5360bf211af7cf6c5fb71228741f519e56f8b6af\": container with ID starting with fa6be14a156a4f7df34ccb9e5360bf211af7cf6c5fb71228741f519e56f8b6af not found: ID does not exist" Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.964849 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.966708 4740 kuberuntime_container.go:808] "Killing container with a grace period" 
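
[Note] The paired E/I lines above are kubelet asking the runtime for the status of containers it has already removed; CRI-O answers with gRPC NotFound, and kubelet logs the error but treats the deletion as complete. A hedged sketch of that idempotent-cleanup pattern; removeContainer is a stand-in helper, not the CRI client:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer is a hypothetical stand-in for a runtime call that can
    // race with removal and return gRPC NotFound, as in the log above.
    func removeContainer(id string) error {
        return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func main() {
        err := removeContainer("1e25ed104f0e...")
        if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
            fmt.Println("already gone, treating removal as success")
            return
        }
        if err != nil {
            fmt.Println("remove failed:", err)
        }
    }
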
pod="openstack/ceilometer-0" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="ceilometer-central-agent" containerID="cri-o://96693fe2d20840d2da008c19461d3ca0f9b1bed65006eadcad2e4f79bba26db8" gracePeriod=30 Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.967250 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="proxy-httpd" containerID="cri-o://1ce2f3491e6c1626e859dbeacd16ac9abf0e661ceb9a8aed430ece41fc4f34f7" gracePeriod=30 Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.967554 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="ceilometer-notification-agent" containerID="cri-o://dc30609f96d4ca8b2e32c18e59b968a0f66ee27781655feb5b94f15cf72f4a6f" gracePeriod=30 Jan 30 16:22:14 crc kubenswrapper[4740]: I0130 16:22:14.967632 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="sg-core" containerID="cri-o://f98f741859ef2d302a54bf9b015161f7bf0b0abc159c63a255af52a0fec34f89" gracePeriod=30 Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.034270 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jz9qh"] Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.052008 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jz9qh"] Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.353256 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" path="/var/lib/kubelet/pods/b939d225-58bf-4604-953d-8ed193ae6f0b/volumes" Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.638495 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1c39bfe6-b89f-4699-95ff-e79c94b13740","Type":"ContainerStarted","Data":"11317ee2aab56f5b771e75d819fd25319fc5b809967fe8a47b671ca80aebd630"} Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.639107 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.642926 4740 generic.go:334] "Generic (PLEG): container finished" podID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerID="1ce2f3491e6c1626e859dbeacd16ac9abf0e661ceb9a8aed430ece41fc4f34f7" exitCode=0 Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.642959 4740 generic.go:334] "Generic (PLEG): container finished" podID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerID="f98f741859ef2d302a54bf9b015161f7bf0b0abc159c63a255af52a0fec34f89" exitCode=2 Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.642967 4740 generic.go:334] "Generic (PLEG): container finished" podID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerID="96693fe2d20840d2da008c19461d3ca0f9b1bed65006eadcad2e4f79bba26db8" exitCode=0 Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.643018 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1e6015-b5d3-49d4-b9f9-9813c700018d","Type":"ContainerDied","Data":"1ce2f3491e6c1626e859dbeacd16ac9abf0e661ceb9a8aed430ece41fc4f34f7"} Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.643098 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4e1e6015-b5d3-49d4-b9f9-9813c700018d","Type":"ContainerDied","Data":"f98f741859ef2d302a54bf9b015161f7bf0b0abc159c63a255af52a0fec34f89"} Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.643112 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1e6015-b5d3-49d4-b9f9-9813c700018d","Type":"ContainerDied","Data":"96693fe2d20840d2da008c19461d3ca0f9b1bed65006eadcad2e4f79bba26db8"} Jan 30 16:22:15 crc kubenswrapper[4740]: I0130 16:22:15.660393 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.821515539 podStartE2EDuration="3.660346189s" podCreationTimestamp="2026-01-30 16:22:12 +0000 UTC" firstStartedPulling="2026-01-30 16:22:13.641118018 +0000 UTC m=+1582.278180617" lastFinishedPulling="2026-01-30 16:22:14.479948668 +0000 UTC m=+1583.117011267" observedRunningTime="2026-01-30 16:22:15.656908404 +0000 UTC m=+1584.293971013" watchObservedRunningTime="2026-01-30 16:22:15.660346189 +0000 UTC m=+1584.297408788" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.366202 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5mlnw"] Jan 30 16:22:21 crc kubenswrapper[4740]: E0130 16:22:21.370389 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="extract-utilities" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.370531 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="extract-utilities" Jan 30 16:22:21 crc kubenswrapper[4740]: E0130 16:22:21.370554 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="extract-content" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.370562 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="extract-content" Jan 30 16:22:21 crc kubenswrapper[4740]: E0130 16:22:21.370776 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.370792 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" Jan 30 16:22:21 crc kubenswrapper[4740]: E0130 16:22:21.370807 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.370814 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.371550 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.372766 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b939d225-58bf-4604-953d-8ed193ae6f0b" containerName="registry-server" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.374080 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5mlnw"] Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.374282 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.478455 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz2gv\" (UniqueName: \"kubernetes.io/projected/31233c97-70d8-47f3-92cb-16794b5680cb-kube-api-access-dz2gv\") pod \"certified-operators-5mlnw\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.478540 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-utilities\") pod \"certified-operators-5mlnw\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.478644 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-catalog-content\") pod \"certified-operators-5mlnw\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.581967 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-catalog-content\") pod \"certified-operators-5mlnw\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.582167 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz2gv\" (UniqueName: \"kubernetes.io/projected/31233c97-70d8-47f3-92cb-16794b5680cb-kube-api-access-dz2gv\") pod \"certified-operators-5mlnw\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.582232 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-utilities\") pod \"certified-operators-5mlnw\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.583007 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-catalog-content\") pod \"certified-operators-5mlnw\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.583084 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-utilities\") pod \"certified-operators-5mlnw\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.607175 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz2gv\" (UniqueName: \"kubernetes.io/projected/31233c97-70d8-47f3-92cb-16794b5680cb-kube-api-access-dz2gv\") pod 
\"certified-operators-5mlnw\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:21 crc kubenswrapper[4740]: I0130 16:22:21.712485 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:22 crc kubenswrapper[4740]: I0130 16:22:22.321756 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5mlnw"] Jan 30 16:22:22 crc kubenswrapper[4740]: I0130 16:22:22.726451 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mlnw" event={"ID":"31233c97-70d8-47f3-92cb-16794b5680cb","Type":"ContainerStarted","Data":"b98d9be0b01e0cebfb4d19f7a95b81727b60b0d27dbf4a23816916bfc262a21b"} Jan 30 16:22:23 crc kubenswrapper[4740]: I0130 16:22:23.078539 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 16:22:23 crc kubenswrapper[4740]: I0130 16:22:23.740493 4740 generic.go:334] "Generic (PLEG): container finished" podID="31233c97-70d8-47f3-92cb-16794b5680cb" containerID="c7782fc1e84f5db81f59ae36fcf5a7bebbcb22014a455cad5743f7114a43d472" exitCode=0 Jan 30 16:22:23 crc kubenswrapper[4740]: I0130 16:22:23.740607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mlnw" event={"ID":"31233c97-70d8-47f3-92cb-16794b5680cb","Type":"ContainerDied","Data":"c7782fc1e84f5db81f59ae36fcf5a7bebbcb22014a455cad5743f7114a43d472"} Jan 30 16:22:23 crc kubenswrapper[4740]: I0130 16:22:23.744601 4740 generic.go:334] "Generic (PLEG): container finished" podID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerID="dc30609f96d4ca8b2e32c18e59b968a0f66ee27781655feb5b94f15cf72f4a6f" exitCode=0 Jan 30 16:22:23 crc kubenswrapper[4740]: I0130 16:22:23.744648 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1e6015-b5d3-49d4-b9f9-9813c700018d","Type":"ContainerDied","Data":"dc30609f96d4ca8b2e32c18e59b968a0f66ee27781655feb5b94f15cf72f4a6f"} Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.193193 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.300781 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-run-httpd\") pod \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.300872 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-scripts\") pod \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.301003 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-log-httpd\") pod \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.301032 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-config-data\") pod \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.301279 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqtxj\" (UniqueName: \"kubernetes.io/projected/4e1e6015-b5d3-49d4-b9f9-9813c700018d-kube-api-access-lqtxj\") pod \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.301427 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-combined-ca-bundle\") pod \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.301503 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-sg-core-conf-yaml\") pod \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\" (UID: \"4e1e6015-b5d3-49d4-b9f9-9813c700018d\") " Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.301487 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4e1e6015-b5d3-49d4-b9f9-9813c700018d" (UID: "4e1e6015-b5d3-49d4-b9f9-9813c700018d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.302699 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4e1e6015-b5d3-49d4-b9f9-9813c700018d" (UID: "4e1e6015-b5d3-49d4-b9f9-9813c700018d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.310764 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-scripts" (OuterVolumeSpecName: "scripts") pod "4e1e6015-b5d3-49d4-b9f9-9813c700018d" (UID: "4e1e6015-b5d3-49d4-b9f9-9813c700018d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.316891 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1e6015-b5d3-49d4-b9f9-9813c700018d-kube-api-access-lqtxj" (OuterVolumeSpecName: "kube-api-access-lqtxj") pod "4e1e6015-b5d3-49d4-b9f9-9813c700018d" (UID: "4e1e6015-b5d3-49d4-b9f9-9813c700018d"). InnerVolumeSpecName "kube-api-access-lqtxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.346542 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4e1e6015-b5d3-49d4-b9f9-9813c700018d" (UID: "4e1e6015-b5d3-49d4-b9f9-9813c700018d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.391381 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e1e6015-b5d3-49d4-b9f9-9813c700018d" (UID: "4e1e6015-b5d3-49d4-b9f9-9813c700018d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.405919 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.405957 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.405971 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.405985 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e1e6015-b5d3-49d4-b9f9-9813c700018d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.406000 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqtxj\" (UniqueName: \"kubernetes.io/projected/4e1e6015-b5d3-49d4-b9f9-9813c700018d-kube-api-access-lqtxj\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.406016 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.424794 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-config-data" (OuterVolumeSpecName: "config-data") pod "4e1e6015-b5d3-49d4-b9f9-9813c700018d" (UID: "4e1e6015-b5d3-49d4-b9f9-9813c700018d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.454835 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.454911 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.508768 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1e6015-b5d3-49d4-b9f9-9813c700018d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.767113 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e1e6015-b5d3-49d4-b9f9-9813c700018d","Type":"ContainerDied","Data":"dd47be54582f4582cf360e4fe2203f139db10d45a4e6d31244f55f1aae67daf1"} Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.767673 4740 scope.go:117] "RemoveContainer" containerID="1ce2f3491e6c1626e859dbeacd16ac9abf0e661ceb9a8aed430ece41fc4f34f7" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.767606 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.812996 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.823449 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.852110 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:22:24 crc kubenswrapper[4740]: E0130 16:22:24.852778 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="sg-core" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.852802 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="sg-core" Jan 30 16:22:24 crc kubenswrapper[4740]: E0130 16:22:24.852834 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="proxy-httpd" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.852842 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="proxy-httpd" Jan 30 16:22:24 crc kubenswrapper[4740]: E0130 16:22:24.852859 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="ceilometer-central-agent" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.852869 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="ceilometer-central-agent" Jan 30 16:22:24 crc kubenswrapper[4740]: E0130 16:22:24.852897 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="ceilometer-notification-agent" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.852905 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="ceilometer-notification-agent" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.853160 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="ceilometer-notification-agent" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.853174 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="proxy-httpd" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.853187 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="ceilometer-central-agent" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.853219 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" containerName="sg-core" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.856643 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.861306 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.861510 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.863197 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.873980 4740 scope.go:117] "RemoveContainer" containerID="f98f741859ef2d302a54bf9b015161f7bf0b0abc159c63a255af52a0fec34f89" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.875205 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.916973 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-log-httpd\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.917117 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-run-httpd\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.917195 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.917237 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.917299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-config-data\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.917419 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:24 crc kubenswrapper[4740]: I0130 16:22:24.917561 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxfkg\" (UniqueName: \"kubernetes.io/projected/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-kube-api-access-vxfkg\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:24 crc 
kubenswrapper[4740]: I0130 16:22:24.917619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-scripts\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.019736 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxfkg\" (UniqueName: \"kubernetes.io/projected/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-kube-api-access-vxfkg\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.019806 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-scripts\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.019861 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-log-httpd\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.019891 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-run-httpd\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.019928 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.019954 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.019991 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-config-data\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.020039 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.020489 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-run-httpd\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc 
kubenswrapper[4740]: I0130 16:22:25.020593 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-log-httpd\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.026074 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.026269 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-scripts\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.026441 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.027281 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.027663 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-config-data\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.037714 4740 scope.go:117] "RemoveContainer" containerID="dc30609f96d4ca8b2e32c18e59b968a0f66ee27781655feb5b94f15cf72f4a6f" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.042431 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxfkg\" (UniqueName: \"kubernetes.io/projected/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-kube-api-access-vxfkg\") pod \"ceilometer-0\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.098365 4740 scope.go:117] "RemoveContainer" containerID="96693fe2d20840d2da008c19461d3ca0f9b1bed65006eadcad2e4f79bba26db8" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.240126 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.350917 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1e6015-b5d3-49d4-b9f9-9813c700018d" path="/var/lib/kubelet/pods/4e1e6015-b5d3-49d4-b9f9-9813c700018d/volumes" Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.782110 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mlnw" event={"ID":"31233c97-70d8-47f3-92cb-16794b5680cb","Type":"ContainerStarted","Data":"a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d"} Jan 30 16:22:25 crc kubenswrapper[4740]: I0130 16:22:25.925814 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:22:26 crc kubenswrapper[4740]: I0130 16:22:26.796025 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a66ba5b7-134a-4917-9426-fdad3dfe7dfc","Type":"ContainerStarted","Data":"0a1d1ac6f01db825c6f5adb5d98e9a899900a7418ef3c1366e282ca335bb6109"} Jan 30 16:22:27 crc kubenswrapper[4740]: I0130 16:22:27.809272 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a66ba5b7-134a-4917-9426-fdad3dfe7dfc","Type":"ContainerStarted","Data":"15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4"} Jan 30 16:22:29 crc kubenswrapper[4740]: I0130 16:22:29.842912 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a66ba5b7-134a-4917-9426-fdad3dfe7dfc","Type":"ContainerStarted","Data":"1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2"} Jan 30 16:22:29 crc kubenswrapper[4740]: I0130 16:22:29.846371 4740 generic.go:334] "Generic (PLEG): container finished" podID="31233c97-70d8-47f3-92cb-16794b5680cb" containerID="a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d" exitCode=0 Jan 30 16:22:29 crc kubenswrapper[4740]: I0130 16:22:29.846407 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mlnw" event={"ID":"31233c97-70d8-47f3-92cb-16794b5680cb","Type":"ContainerDied","Data":"a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d"} Jan 30 16:22:30 crc kubenswrapper[4740]: I0130 16:22:30.863177 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a66ba5b7-134a-4917-9426-fdad3dfe7dfc","Type":"ContainerStarted","Data":"fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297"} Jan 30 16:22:31 crc kubenswrapper[4740]: I0130 16:22:31.888370 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mlnw" event={"ID":"31233c97-70d8-47f3-92cb-16794b5680cb","Type":"ContainerStarted","Data":"e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf"} Jan 30 16:22:31 crc kubenswrapper[4740]: I0130 16:22:31.913661 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5mlnw" podStartSLOduration=4.088943164 podStartE2EDuration="10.913637999s" podCreationTimestamp="2026-01-30 16:22:21 +0000 UTC" firstStartedPulling="2026-01-30 16:22:23.744411113 +0000 UTC m=+1592.381473712" lastFinishedPulling="2026-01-30 16:22:30.569105948 +0000 UTC m=+1599.206168547" observedRunningTime="2026-01-30 16:22:31.90843577 +0000 UTC m=+1600.545498369" watchObservedRunningTime="2026-01-30 16:22:31.913637999 +0000 UTC m=+1600.550700598" Jan 30 16:22:34 crc kubenswrapper[4740]: I0130 
16:22:34.928527 4740 generic.go:334] "Generic (PLEG): container finished" podID="7b5a28f7-a3dd-4812-af52-97f58641116a" containerID="79c79f0e0908c57dfcdcea35437ea56783f95ff00ddd358c149b22ded4d10267" exitCode=0 Jan 30 16:22:34 crc kubenswrapper[4740]: I0130 16:22:34.928613 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kktg4" event={"ID":"7b5a28f7-a3dd-4812-af52-97f58641116a","Type":"ContainerDied","Data":"79c79f0e0908c57dfcdcea35437ea56783f95ff00ddd358c149b22ded4d10267"} Jan 30 16:22:34 crc kubenswrapper[4740]: I0130 16:22:34.936651 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a66ba5b7-134a-4917-9426-fdad3dfe7dfc","Type":"ContainerStarted","Data":"bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2"} Jan 30 16:22:34 crc kubenswrapper[4740]: I0130 16:22:34.936862 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:22:34 crc kubenswrapper[4740]: I0130 16:22:34.979280 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.733103185 podStartE2EDuration="10.979248128s" podCreationTimestamp="2026-01-30 16:22:24 +0000 UTC" firstStartedPulling="2026-01-30 16:22:25.917994962 +0000 UTC m=+1594.555057551" lastFinishedPulling="2026-01-30 16:22:34.164139895 +0000 UTC m=+1602.801202494" observedRunningTime="2026-01-30 16:22:34.977755261 +0000 UTC m=+1603.614817860" watchObservedRunningTime="2026-01-30 16:22:34.979248128 +0000 UTC m=+1603.616310727" Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.500276 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.644919 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sl9p\" (UniqueName: \"kubernetes.io/projected/7b5a28f7-a3dd-4812-af52-97f58641116a-kube-api-access-5sl9p\") pod \"7b5a28f7-a3dd-4812-af52-97f58641116a\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.644970 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-combined-ca-bundle\") pod \"7b5a28f7-a3dd-4812-af52-97f58641116a\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.645072 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-scripts\") pod \"7b5a28f7-a3dd-4812-af52-97f58641116a\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.645203 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-config-data\") pod \"7b5a28f7-a3dd-4812-af52-97f58641116a\" (UID: \"7b5a28f7-a3dd-4812-af52-97f58641116a\") " Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.661099 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-scripts" (OuterVolumeSpecName: "scripts") pod "7b5a28f7-a3dd-4812-af52-97f58641116a" (UID: "7b5a28f7-a3dd-4812-af52-97f58641116a"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.661569 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5a28f7-a3dd-4812-af52-97f58641116a-kube-api-access-5sl9p" (OuterVolumeSpecName: "kube-api-access-5sl9p") pod "7b5a28f7-a3dd-4812-af52-97f58641116a" (UID: "7b5a28f7-a3dd-4812-af52-97f58641116a"). InnerVolumeSpecName "kube-api-access-5sl9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.682411 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-config-data" (OuterVolumeSpecName: "config-data") pod "7b5a28f7-a3dd-4812-af52-97f58641116a" (UID: "7b5a28f7-a3dd-4812-af52-97f58641116a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.749098 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.749658 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sl9p\" (UniqueName: \"kubernetes.io/projected/7b5a28f7-a3dd-4812-af52-97f58641116a-kube-api-access-5sl9p\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.749671 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.752783 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b5a28f7-a3dd-4812-af52-97f58641116a" (UID: "7b5a28f7-a3dd-4812-af52-97f58641116a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.851764 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b5a28f7-a3dd-4812-af52-97f58641116a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.969654 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kktg4" event={"ID":"7b5a28f7-a3dd-4812-af52-97f58641116a","Type":"ContainerDied","Data":"a218bec101565a9eb83d2754beb5818982fa6b2a6c4fc399948759fb6a2d16a1"} Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.969720 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a218bec101565a9eb83d2754beb5818982fa6b2a6c4fc399948759fb6a2d16a1" Jan 30 16:22:36 crc kubenswrapper[4740]: I0130 16:22:36.969807 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kktg4" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.153838 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:22:37 crc kubenswrapper[4740]: E0130 16:22:37.155033 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5a28f7-a3dd-4812-af52-97f58641116a" containerName="nova-cell0-conductor-db-sync" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.155133 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5a28f7-a3dd-4812-af52-97f58641116a" containerName="nova-cell0-conductor-db-sync" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.155590 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5a28f7-a3dd-4812-af52-97f58641116a" containerName="nova-cell0-conductor-db-sync" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.156866 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.162963 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ks2ch" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.163392 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.188165 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.268222 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtjz2\" (UniqueName: \"kubernetes.io/projected/dfe42dad-1fc3-4802-8d95-2e764a6c2750-kube-api-access-dtjz2\") pod \"nova-cell0-conductor-0\" (UID: \"dfe42dad-1fc3-4802-8d95-2e764a6c2750\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.268757 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe42dad-1fc3-4802-8d95-2e764a6c2750-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dfe42dad-1fc3-4802-8d95-2e764a6c2750\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.269063 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe42dad-1fc3-4802-8d95-2e764a6c2750-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dfe42dad-1fc3-4802-8d95-2e764a6c2750\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.371372 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe42dad-1fc3-4802-8d95-2e764a6c2750-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dfe42dad-1fc3-4802-8d95-2e764a6c2750\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.371567 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe42dad-1fc3-4802-8d95-2e764a6c2750-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dfe42dad-1fc3-4802-8d95-2e764a6c2750\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 
16:22:37.371665 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjz2\" (UniqueName: \"kubernetes.io/projected/dfe42dad-1fc3-4802-8d95-2e764a6c2750-kube-api-access-dtjz2\") pod \"nova-cell0-conductor-0\" (UID: \"dfe42dad-1fc3-4802-8d95-2e764a6c2750\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.377556 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfe42dad-1fc3-4802-8d95-2e764a6c2750-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dfe42dad-1fc3-4802-8d95-2e764a6c2750\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.378320 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfe42dad-1fc3-4802-8d95-2e764a6c2750-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dfe42dad-1fc3-4802-8d95-2e764a6c2750\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.432162 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjz2\" (UniqueName: \"kubernetes.io/projected/dfe42dad-1fc3-4802-8d95-2e764a6c2750-kube-api-access-dtjz2\") pod \"nova-cell0-conductor-0\" (UID: \"dfe42dad-1fc3-4802-8d95-2e764a6c2750\") " pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:37 crc kubenswrapper[4740]: I0130 16:22:37.513161 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:38 crc kubenswrapper[4740]: I0130 16:22:38.019990 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 16:22:38 crc kubenswrapper[4740]: I0130 16:22:38.997735 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dfe42dad-1fc3-4802-8d95-2e764a6c2750","Type":"ContainerStarted","Data":"96de2d467575dec585f184aa63da896da27cf8dcbb919daf3756c9aef48a1526"} Jan 30 16:22:38 crc kubenswrapper[4740]: I0130 16:22:38.998181 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dfe42dad-1fc3-4802-8d95-2e764a6c2750","Type":"ContainerStarted","Data":"3b2015eddcc2c73be0135c63c04b05a2d7288732825bf0255835fc947e9e1b61"} Jan 30 16:22:38 crc kubenswrapper[4740]: I0130 16:22:38.999798 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:39 crc kubenswrapper[4740]: I0130 16:22:39.033704 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.033672708 podStartE2EDuration="2.033672708s" podCreationTimestamp="2026-01-30 16:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:22:39.02529541 +0000 UTC m=+1607.662358039" watchObservedRunningTime="2026-01-30 16:22:39.033672708 +0000 UTC m=+1607.670735307" Jan 30 16:22:41 crc kubenswrapper[4740]: I0130 16:22:41.712724 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:41 crc kubenswrapper[4740]: I0130 16:22:41.713468 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:41 crc 
kubenswrapper[4740]: I0130 16:22:41.783310 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:42 crc kubenswrapper[4740]: I0130 16:22:42.115426 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:42 crc kubenswrapper[4740]: I0130 16:22:42.415293 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5mlnw"] Jan 30 16:22:44 crc kubenswrapper[4740]: I0130 16:22:44.074539 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5mlnw" podUID="31233c97-70d8-47f3-92cb-16794b5680cb" containerName="registry-server" containerID="cri-o://e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf" gracePeriod=2 Jan 30 16:22:44 crc kubenswrapper[4740]: I0130 16:22:44.685686 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:44 crc kubenswrapper[4740]: I0130 16:22:44.792055 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-catalog-content\") pod \"31233c97-70d8-47f3-92cb-16794b5680cb\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " Jan 30 16:22:44 crc kubenswrapper[4740]: I0130 16:22:44.792165 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-utilities\") pod \"31233c97-70d8-47f3-92cb-16794b5680cb\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " Jan 30 16:22:44 crc kubenswrapper[4740]: I0130 16:22:44.792493 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz2gv\" (UniqueName: \"kubernetes.io/projected/31233c97-70d8-47f3-92cb-16794b5680cb-kube-api-access-dz2gv\") pod \"31233c97-70d8-47f3-92cb-16794b5680cb\" (UID: \"31233c97-70d8-47f3-92cb-16794b5680cb\") " Jan 30 16:22:44 crc kubenswrapper[4740]: I0130 16:22:44.795226 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-utilities" (OuterVolumeSpecName: "utilities") pod "31233c97-70d8-47f3-92cb-16794b5680cb" (UID: "31233c97-70d8-47f3-92cb-16794b5680cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:44 crc kubenswrapper[4740]: I0130 16:22:44.814899 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31233c97-70d8-47f3-92cb-16794b5680cb-kube-api-access-dz2gv" (OuterVolumeSpecName: "kube-api-access-dz2gv") pod "31233c97-70d8-47f3-92cb-16794b5680cb" (UID: "31233c97-70d8-47f3-92cb-16794b5680cb"). InnerVolumeSpecName "kube-api-access-dz2gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:22:44 crc kubenswrapper[4740]: I0130 16:22:44.849072 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31233c97-70d8-47f3-92cb-16794b5680cb" (UID: "31233c97-70d8-47f3-92cb-16794b5680cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:22:44 crc kubenswrapper[4740]: I0130 16:22:44.895451 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz2gv\" (UniqueName: \"kubernetes.io/projected/31233c97-70d8-47f3-92cb-16794b5680cb-kube-api-access-dz2gv\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:44 crc kubenswrapper[4740]: I0130 16:22:44.895488 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:44 crc kubenswrapper[4740]: I0130 16:22:44.895499 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31233c97-70d8-47f3-92cb-16794b5680cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.091694 4740 generic.go:334] "Generic (PLEG): container finished" podID="31233c97-70d8-47f3-92cb-16794b5680cb" containerID="e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf" exitCode=0 Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.091760 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mlnw" event={"ID":"31233c97-70d8-47f3-92cb-16794b5680cb","Type":"ContainerDied","Data":"e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf"} Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.091797 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5mlnw" Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.091832 4740 scope.go:117] "RemoveContainer" containerID="e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf" Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.091812 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mlnw" event={"ID":"31233c97-70d8-47f3-92cb-16794b5680cb","Type":"ContainerDied","Data":"b98d9be0b01e0cebfb4d19f7a95b81727b60b0d27dbf4a23816916bfc262a21b"} Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.121721 4740 scope.go:117] "RemoveContainer" containerID="a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d" Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.135206 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5mlnw"] Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.145438 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5mlnw"] Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.156682 4740 scope.go:117] "RemoveContainer" containerID="c7782fc1e84f5db81f59ae36fcf5a7bebbcb22014a455cad5743f7114a43d472" Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.204094 4740 scope.go:117] "RemoveContainer" containerID="e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf" Jan 30 16:22:45 crc kubenswrapper[4740]: E0130 16:22:45.207445 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf\": container with ID starting with e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf not found: ID does not exist" containerID="e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf" Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.207493 
4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf"} err="failed to get container status \"e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf\": rpc error: code = NotFound desc = could not find container \"e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf\": container with ID starting with e193e7d0088889c34884b1d568312ce15d8b122fb86e2bd310db32de720c05bf not found: ID does not exist" Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.207528 4740 scope.go:117] "RemoveContainer" containerID="a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d" Jan 30 16:22:45 crc kubenswrapper[4740]: E0130 16:22:45.208137 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d\": container with ID starting with a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d not found: ID does not exist" containerID="a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d" Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.208196 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d"} err="failed to get container status \"a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d\": rpc error: code = NotFound desc = could not find container \"a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d\": container with ID starting with a30aa35df1f72c34e31e79001e80c4b470b3ebf583decd4016477fb01dc52b2d not found: ID does not exist" Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.208235 4740 scope.go:117] "RemoveContainer" containerID="c7782fc1e84f5db81f59ae36fcf5a7bebbcb22014a455cad5743f7114a43d472" Jan 30 16:22:45 crc kubenswrapper[4740]: E0130 16:22:45.209114 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7782fc1e84f5db81f59ae36fcf5a7bebbcb22014a455cad5743f7114a43d472\": container with ID starting with c7782fc1e84f5db81f59ae36fcf5a7bebbcb22014a455cad5743f7114a43d472 not found: ID does not exist" containerID="c7782fc1e84f5db81f59ae36fcf5a7bebbcb22014a455cad5743f7114a43d472" Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.209150 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7782fc1e84f5db81f59ae36fcf5a7bebbcb22014a455cad5743f7114a43d472"} err="failed to get container status \"c7782fc1e84f5db81f59ae36fcf5a7bebbcb22014a455cad5743f7114a43d472\": rpc error: code = NotFound desc = could not find container \"c7782fc1e84f5db81f59ae36fcf5a7bebbcb22014a455cad5743f7114a43d472\": container with ID starting with c7782fc1e84f5db81f59ae36fcf5a7bebbcb22014a455cad5743f7114a43d472 not found: ID does not exist" Jan 30 16:22:45 crc kubenswrapper[4740]: I0130 16:22:45.381228 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31233c97-70d8-47f3-92cb-16794b5680cb" path="/var/lib/kubelet/pods/31233c97-70d8-47f3-92cb-16794b5680cb/volumes" Jan 30 16:22:47 crc kubenswrapper[4740]: I0130 16:22:47.545568 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.061883 4740 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-98dhw"] Jan 30 16:22:48 crc kubenswrapper[4740]: E0130 16:22:48.062545 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31233c97-70d8-47f3-92cb-16794b5680cb" containerName="extract-content" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.062571 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="31233c97-70d8-47f3-92cb-16794b5680cb" containerName="extract-content" Jan 30 16:22:48 crc kubenswrapper[4740]: E0130 16:22:48.062610 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31233c97-70d8-47f3-92cb-16794b5680cb" containerName="extract-utilities" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.062619 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="31233c97-70d8-47f3-92cb-16794b5680cb" containerName="extract-utilities" Jan 30 16:22:48 crc kubenswrapper[4740]: E0130 16:22:48.062643 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31233c97-70d8-47f3-92cb-16794b5680cb" containerName="registry-server" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.062657 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="31233c97-70d8-47f3-92cb-16794b5680cb" containerName="registry-server" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.062905 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="31233c97-70d8-47f3-92cb-16794b5680cb" containerName="registry-server" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.063856 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-98dhw" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.070661 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.074382 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.082216 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-98dhw"] Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.176634 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-456cd\" (UniqueName: \"kubernetes.io/projected/c239553d-1e26-46d3-9487-17a11ad18ad9-kube-api-access-456cd\") pod \"nova-cell0-cell-mapping-98dhw\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " pod="openstack/nova-cell0-cell-mapping-98dhw" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.176753 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-scripts\") pod \"nova-cell0-cell-mapping-98dhw\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " pod="openstack/nova-cell0-cell-mapping-98dhw" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.176892 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-config-data\") pod \"nova-cell0-cell-mapping-98dhw\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " pod="openstack/nova-cell0-cell-mapping-98dhw" Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.176960 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.280057 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-config-data\") pod \"nova-cell0-cell-mapping-98dhw\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " pod="openstack/nova-cell0-cell-mapping-98dhw"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.280126 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-98dhw\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " pod="openstack/nova-cell0-cell-mapping-98dhw"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.280252 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-456cd\" (UniqueName: \"kubernetes.io/projected/c239553d-1e26-46d3-9487-17a11ad18ad9-kube-api-access-456cd\") pod \"nova-cell0-cell-mapping-98dhw\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " pod="openstack/nova-cell0-cell-mapping-98dhw"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.280301 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-scripts\") pod \"nova-cell0-cell-mapping-98dhw\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " pod="openstack/nova-cell0-cell-mapping-98dhw"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.286095 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.290832 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.296743 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-config-data\") pod \"nova-cell0-cell-mapping-98dhw\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " pod="openstack/nova-cell0-cell-mapping-98dhw"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.300219 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.314110 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-98dhw\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " pod="openstack/nova-cell0-cell-mapping-98dhw"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.315610 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-scripts\") pod \"nova-cell0-cell-mapping-98dhw\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " pod="openstack/nova-cell0-cell-mapping-98dhw"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.321778 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.322105 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-456cd\" (UniqueName: \"kubernetes.io/projected/c239553d-1e26-46d3-9487-17a11ad18ad9-kube-api-access-456cd\") pod \"nova-cell0-cell-mapping-98dhw\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " pod="openstack/nova-cell0-cell-mapping-98dhw"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.390037 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fc7e12f-2618-4590-a9b4-13ec70eceef0-logs\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.390228 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.390369 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lsv9\" (UniqueName: \"kubernetes.io/projected/1fc7e12f-2618-4590-a9b4-13ec70eceef0-kube-api-access-9lsv9\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.390592 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-config-data\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.390867 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-98dhw"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.439453 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.441247 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.444786 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.475456 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.478209 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.486012 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.509178 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lsv9\" (UniqueName: \"kubernetes.io/projected/1fc7e12f-2618-4590-a9b4-13ec70eceef0-kube-api-access-9lsv9\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.509495 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-config-data\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.509585 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fc7e12f-2618-4590-a9b4-13ec70eceef0-logs\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.509786 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.516176 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fc7e12f-2618-4590-a9b4-13ec70eceef0-logs\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.532656 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.537444 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-config-data\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.552312 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.553815 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lsv9\" (UniqueName: \"kubernetes.io/projected/1fc7e12f-2618-4590-a9b4-13ec70eceef0-kube-api-access-9lsv9\") pod \"nova-api-0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.559898 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.667582 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-config-data\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.667691 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-logs\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.667761 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq4lx\" (UniqueName: \"kubernetes.io/projected/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-kube-api-access-bq4lx\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.667807 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvvx8\" (UniqueName: \"kubernetes.io/projected/0b543a20-70dd-45ec-8141-4bff06d6d0ce-kube-api-access-rvvx8\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.667859 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.667908 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.667985 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.734031 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.778549 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.781520 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.793968 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvvx8\" (UniqueName: \"kubernetes.io/projected/0b543a20-70dd-45ec-8141-4bff06d6d0ce-kube-api-access-rvvx8\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.794661 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.795016 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.795440 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.795535 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-config-data\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.796208 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-logs\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.796378 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq4lx\" (UniqueName: \"kubernetes.io/projected/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-kube-api-access-bq4lx\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.797183 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-logs\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.808962 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.811800 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.841584 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.841932 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-config-data\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.842532 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.855120 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.877001 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq4lx\" (UniqueName: \"kubernetes.io/projected/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-kube-api-access-bq4lx\") pod \"nova-metadata-0\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " pod="openstack/nova-metadata-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.878382 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-4r9vp"]
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.880837 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvvx8\" (UniqueName: \"kubernetes.io/projected/0b543a20-70dd-45ec-8141-4bff06d6d0ce-kube-api-access-rvvx8\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.893504 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.899011 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " pod="openstack/nova-scheduler-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.904271 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsqhp\" (UniqueName: \"kubernetes.io/projected/c0199afa-1963-4ece-bcbc-272b76d500a0-kube-api-access-nsqhp\") pod \"nova-scheduler-0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " pod="openstack/nova-scheduler-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.907604 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-config-data\") pod \"nova-scheduler-0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " pod="openstack/nova-scheduler-0"
Jan 30 16:22:48 crc kubenswrapper[4740]: I0130 16:22:48.909984 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-4r9vp"]
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.007103 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.009700 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-config-data\") pod \"nova-scheduler-0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " pod="openstack/nova-scheduler-0"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.009748 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " pod="openstack/nova-scheduler-0"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.009800 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsqhp\" (UniqueName: \"kubernetes.io/projected/c0199afa-1963-4ece-bcbc-272b76d500a0-kube-api-access-nsqhp\") pod \"nova-scheduler-0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " pod="openstack/nova-scheduler-0"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.009860 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.010968 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.011057 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-config\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.011114 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.011145 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.011173 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25ct\" (UniqueName: \"kubernetes.io/projected/c324fe3d-c078-427f-9c52-b99f1008f395-kube-api-access-k25ct\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.021097 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " pod="openstack/nova-scheduler-0"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.034521 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-config-data\") pod \"nova-scheduler-0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " pod="openstack/nova-scheduler-0"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.041698 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsqhp\" (UniqueName: \"kubernetes.io/projected/c0199afa-1963-4ece-bcbc-272b76d500a0-kube-api-access-nsqhp\") pod \"nova-scheduler-0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " pod="openstack/nova-scheduler-0"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.059462 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.114003 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.114092 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.114178 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-config\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.114237 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.114274 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.114308 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k25ct\" (UniqueName: \"kubernetes.io/projected/c324fe3d-c078-427f-9c52-b99f1008f395-kube-api-access-k25ct\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.116736 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.117145 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-config\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.122190 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.124926 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.129509 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.145446 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k25ct\" (UniqueName: \"kubernetes.io/projected/c324fe3d-c078-427f-9c52-b99f1008f395-kube-api-access-k25ct\") pod \"dnsmasq-dns-884c8b8f5-4r9vp\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.168602 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.262544 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.324707 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-98dhw"]
Jan 30 16:22:49 crc kubenswrapper[4740]: W0130 16:22:49.449193 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc239553d_1e26_46d3_9487_17a11ad18ad9.slice/crio-0d828877ab1a52e5314cf8d8bf2670a5b28f4433a0de1cc37667eb7bb14447be WatchSource:0}: Error finding container 0d828877ab1a52e5314cf8d8bf2670a5b28f4433a0de1cc37667eb7bb14447be: Status 404 returned error can't find the container with id 0d828877ab1a52e5314cf8d8bf2670a5b28f4433a0de1cc37667eb7bb14447be
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.680794 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.964313 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-88k28"]
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.966579 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.971123 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.971277 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 30 16:22:49 crc kubenswrapper[4740]: I0130 16:22:49.986935 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-88k28"]
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.057037 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5jn6\" (UniqueName: \"kubernetes.io/projected/df10ecb6-43ad-404c-b51c-b64913bab019-kube-api-access-k5jn6\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.057091 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-config-data\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.057144 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-scripts\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.057188 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.124234 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.161232 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5jn6\" (UniqueName: \"kubernetes.io/projected/df10ecb6-43ad-404c-b51c-b64913bab019-kube-api-access-k5jn6\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.161323 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-config-data\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.161434 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-scripts\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
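Each "Caches populated for *v1.Secret" entry above marks a kubelet reflector finishing its initial LIST for a secret referenced by a newly admitted pod. The same populate-then-sync sequence can be reproduced with a client-go informer; a minimal sketch assuming in-cluster credentials and the openstack namespace seen in the log (the kubelet itself uses per-object filtered reflectors rather than a namespace-wide informer):

    package main

    import (
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	factory := informers.NewSharedInformerFactoryWithOptions(
    		cs, 10*time.Minute, informers.WithNamespace("openstack"))
    	informer := factory.Core().V1().Secrets().Informer()

    	stop := make(chan struct{})
    	factory.Start(stop)
    	// Blocks until the reflector's initial LIST has filled the cache --
    	// the moment the kubelet logs "Caches populated".
    	factory.WaitForCacheSync(stop)
    	_ = informer
    }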
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.161506 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.176241 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.177334 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-config-data\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.184060 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-scripts\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.188059 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5jn6\" (UniqueName: \"kubernetes.io/projected/df10ecb6-43ad-404c-b51c-b64913bab019-kube-api-access-k5jn6\") pod \"nova-cell1-conductor-db-sync-88k28\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.221261 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fc7e12f-2618-4590-a9b4-13ec70eceef0","Type":"ContainerStarted","Data":"13595dde6ebabec458a0fa8ea3636e849164bea079cd7226f1064b6a032638e2"}
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.231580 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-98dhw" event={"ID":"c239553d-1e26-46d3-9487-17a11ad18ad9","Type":"ContainerStarted","Data":"2f5ce6e264a40d5796fae903ead052c7cdfa8bde92eeeff8fabd0dcc83223b14"}
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.231638 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-98dhw" event={"ID":"c239553d-1e26-46d3-9487-17a11ad18ad9","Type":"ContainerStarted","Data":"0d828877ab1a52e5314cf8d8bf2670a5b28f4433a0de1cc37667eb7bb14447be"}
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.239327 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b543a20-70dd-45ec-8141-4bff06d6d0ce","Type":"ContainerStarted","Data":"21e49f1e4617596db3e4865792a36b10ed3a53c183c27fdfa0abdc4c07efa159"}
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.306434 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-88k28"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.325027 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-98dhw" podStartSLOduration=2.324997553 podStartE2EDuration="2.324997553s" podCreationTimestamp="2026-01-30 16:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:22:50.261447737 +0000 UTC m=+1618.898510336" watchObservedRunningTime="2026-01-30 16:22:50.324997553 +0000 UTC m=+1618.962060162"
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.328262 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.433451 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.618736 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-4r9vp"]
Jan 30 16:22:50 crc kubenswrapper[4740]: I0130 16:22:50.987439 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-88k28"]
Jan 30 16:22:51 crc kubenswrapper[4740]: I0130 16:22:51.318449 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c0199afa-1963-4ece-bcbc-272b76d500a0","Type":"ContainerStarted","Data":"ff3bef220291b9fc1bf77199bcf1c2adc53a86f6b071976447aacc8759fdcead"}
Jan 30 16:22:51 crc kubenswrapper[4740]: I0130 16:22:51.337818 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-88k28" event={"ID":"df10ecb6-43ad-404c-b51c-b64913bab019","Type":"ContainerStarted","Data":"cd7e7ebb7e18fc2c122df30948b863b61e0ef7169d07a36b39e16465b304a7f3"}
Jan 30 16:22:51 crc kubenswrapper[4740]: I0130 16:22:51.357922 4740 generic.go:334] "Generic (PLEG): container finished" podID="c324fe3d-c078-427f-9c52-b99f1008f395" containerID="402635d37ade2058deb5d4897d4475adbcf6c85020548db4f63684e4130cd355" exitCode=0
Jan 30 16:22:51 crc kubenswrapper[4740]: I0130 16:22:51.383101 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp" event={"ID":"c324fe3d-c078-427f-9c52-b99f1008f395","Type":"ContainerDied","Data":"402635d37ade2058deb5d4897d4475adbcf6c85020548db4f63684e4130cd355"}
Jan 30 16:22:51 crc kubenswrapper[4740]: I0130 16:22:51.383160 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp" event={"ID":"c324fe3d-c078-427f-9c52-b99f1008f395","Type":"ContainerStarted","Data":"5f2cbd5e6ba8d75d633b6d3d421890d2156772de2bafb95576cd67baebf0ad7a"}
Jan 30 16:22:51 crc kubenswrapper[4740]: I0130 16:22:51.383173 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d","Type":"ContainerStarted","Data":"1540cbedc5935b29d73732c3df4316c7d91eeb1d69d4a95f467ac05f35ed58c4"}
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.434051 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-88k28" event={"ID":"df10ecb6-43ad-404c-b51c-b64913bab019","Type":"ContainerStarted","Data":"e5569e41650d8cc725597ad721a090481c7b7f95dafbc9916f0d23819cb3cabc"}
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.446401 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp" event={"ID":"c324fe3d-c078-427f-9c52-b99f1008f395","Type":"ContainerStarted","Data":"0a389692e8eb7f0fd8c33bcc7f0b80044c3c11f5d34eca4fb5aa12a52f6deb3e"}
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.446958 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.465457 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-88k28" podStartSLOduration=3.465426857 podStartE2EDuration="3.465426857s" podCreationTimestamp="2026-01-30 16:22:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:22:52.462092564 +0000 UTC m=+1621.099155163" watchObservedRunningTime="2026-01-30 16:22:52.465426857 +0000 UTC m=+1621.102489456"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.494231 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp" podStartSLOduration=4.494205332 podStartE2EDuration="4.494205332s" podCreationTimestamp="2026-01-30 16:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:22:52.492098119 +0000 UTC m=+1621.129160708" watchObservedRunningTime="2026-01-30 16:22:52.494205332 +0000 UTC m=+1621.131267931"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.555532 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cj6"]
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.562264 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2cj6"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.585830 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cj6"]
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.756474 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-catalog-content\") pod \"redhat-marketplace-g2cj6\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " pod="openshift-marketplace/redhat-marketplace-g2cj6"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.756601 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhrg9\" (UniqueName: \"kubernetes.io/projected/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-kube-api-access-lhrg9\") pod \"redhat-marketplace-g2cj6\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " pod="openshift-marketplace/redhat-marketplace-g2cj6"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.756664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-utilities\") pod \"redhat-marketplace-g2cj6\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " pod="openshift-marketplace/redhat-marketplace-g2cj6"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.859185 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-catalog-content\") pod \"redhat-marketplace-g2cj6\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " pod="openshift-marketplace/redhat-marketplace-g2cj6"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.859299 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhrg9\" (UniqueName: \"kubernetes.io/projected/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-kube-api-access-lhrg9\") pod \"redhat-marketplace-g2cj6\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " pod="openshift-marketplace/redhat-marketplace-g2cj6"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.859380 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-utilities\") pod \"redhat-marketplace-g2cj6\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " pod="openshift-marketplace/redhat-marketplace-g2cj6"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.860032 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-utilities\") pod \"redhat-marketplace-g2cj6\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " pod="openshift-marketplace/redhat-marketplace-g2cj6"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.860281 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-catalog-content\") pod \"redhat-marketplace-g2cj6\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " pod="openshift-marketplace/redhat-marketplace-g2cj6"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.884025 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhrg9\" (UniqueName: \"kubernetes.io/projected/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-kube-api-access-lhrg9\") pod \"redhat-marketplace-g2cj6\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " pod="openshift-marketplace/redhat-marketplace-g2cj6"
Jan 30 16:22:52 crc kubenswrapper[4740]: I0130 16:22:52.911097 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2cj6"
Jan 30 16:22:53 crc kubenswrapper[4740]: I0130 16:22:53.166493 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:22:53 crc kubenswrapper[4740]: I0130 16:22:53.179191 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 16:22:54 crc kubenswrapper[4740]: I0130 16:22:54.454880 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:22:54 crc kubenswrapper[4740]: I0130 16:22:54.455319 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:22:54 crc kubenswrapper[4740]: I0130 16:22:54.455398 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6"
Jan 30 16:22:54 crc kubenswrapper[4740]: I0130 16:22:54.456534 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 16:22:54 crc kubenswrapper[4740]: I0130 16:22:54.456599 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" gracePeriod=600
Jan 30 16:22:55 crc kubenswrapper[4740]: I0130 16:22:55.255953 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 16:22:55 crc kubenswrapper[4740]: I0130 16:22:55.521533 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" exitCode=0
Jan 30 16:22:55 crc kubenswrapper[4740]: I0130 16:22:55.522019 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97"}
Jan 30 16:22:55 crc kubenswrapper[4740]: I0130 16:22:55.522065 4740 scope.go:117] "RemoveContainer" containerID="8a0922e4de366e57138167824b08934e73cd7659f84fae5490627ddb260dd599"
Jan 30 16:22:55 crc kubenswrapper[4740]: E0130 16:22:55.769096 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
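The machine-config-daemon sequence above is the standard liveness-failure path: the HTTP probe to 127.0.0.1:8798/health is refused, the kubelet kills the container with its grace period, and subsequent restarts are throttled by CrashLoopBackOff, with the backoff capped at 5m0s as the pod_workers error records. Below is a sketch of a probe with that endpoint using the Kubernetes Go API types; the period and threshold values are illustrative assumptions, not read from the actual DaemonSet:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    var livenessProbe = &corev1.Probe{
    	ProbeHandler: corev1.ProbeHandler{
    		HTTPGet: &corev1.HTTPGetAction{
    			// Endpoint matching the failing probe in the log.
    			Host: "127.0.0.1",
    			Path: "/health",
    			Port: intstr.FromInt(8798),
    		},
    	},
    	PeriodSeconds:    10, // assumed probe interval
    	FailureThreshold: 3,  // assumed failures before the kubelet kills the container
    }

    func main() { fmt.Println("failureThreshold:", livenessProbe.FailureThreshold) }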
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.493076 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cj6"]
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.562829 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b543a20-70dd-45ec-8141-4bff06d6d0ce","Type":"ContainerStarted","Data":"5fdfbd1770f95f346a0ab41f9ff9e297c076907cc69b48736e5aee5e70f2dbc6"}
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.563228 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0b543a20-70dd-45ec-8141-4bff06d6d0ce" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5fdfbd1770f95f346a0ab41f9ff9e297c076907cc69b48736e5aee5e70f2dbc6" gracePeriod=30
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.592704 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cj6" event={"ID":"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2","Type":"ContainerStarted","Data":"76cd319389155d9b794b166f7f4b73f3811bbb96fb657a118ecfc949ccef974e"}
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.610979 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.840067623 podStartE2EDuration="8.610950807s" podCreationTimestamp="2026-01-30 16:22:48 +0000 UTC" firstStartedPulling="2026-01-30 16:22:50.125135527 +0000 UTC m=+1618.762198126" lastFinishedPulling="2026-01-30 16:22:55.896018711 +0000 UTC m=+1624.533081310" observedRunningTime="2026-01-30 16:22:56.588307665 +0000 UTC m=+1625.225370274" watchObservedRunningTime="2026-01-30 16:22:56.610950807 +0000 UTC m=+1625.248013406"
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.617468 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d","Type":"ContainerStarted","Data":"68720c8de9af007c6d359430808d9a91d8d701fd3ace3dbfa563c40b74a32e33"}
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.617741 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" containerName="nova-metadata-log" containerID="cri-o://68720c8de9af007c6d359430808d9a91d8d701fd3ace3dbfa563c40b74a32e33" gracePeriod=30
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.618502 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" containerName="nova-metadata-metadata" containerID="cri-o://1422d4b1236292f82fa2b4ffad401e08095e4e4483b2c3bd356ee4cd68a30326" gracePeriod=30
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.626732 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97"
Jan 30 16:22:56 crc kubenswrapper[4740]: E0130 16:22:56.627041 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
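The podStartSLOduration arithmetic in the tracker entries subtracts the image-pull window from the end-to-end startup time. For nova-cell1-novncproxy-0 above: 8.610950807s end-to-end, minus the pull window from 16:22:50.125135527 to 16:22:55.896018711 (5.770883184s), gives the reported 2.840067623s. A small check of that arithmetic, with timestamps copied from the log entry:

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2026-01-30 16:22:48 +0000 UTC")
    	pullStart := mustParse("2026-01-30 16:22:50.125135527 +0000 UTC")
    	pullEnd := mustParse("2026-01-30 16:22:55.896018711 +0000 UTC")
    	running := mustParse("2026-01-30 16:22:56.610950807 +0000 UTC")

    	e2e := running.Sub(created)         // 8.610950807s, the podStartE2EDuration
    	slo := e2e - pullEnd.Sub(pullStart) // 2.840067623s, the podStartSLOduration
    	fmt.Println(e2e, slo)
    }

Entries with the zero-value pull timestamps (0001-01-01) had no image pull, which is why their SLO and E2E durations are identical.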
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.643323 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fc7e12f-2618-4590-a9b4-13ec70eceef0","Type":"ContainerStarted","Data":"c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b"}
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.645702 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.123893353 podStartE2EDuration="8.64567998s" podCreationTimestamp="2026-01-30 16:22:48 +0000 UTC" firstStartedPulling="2026-01-30 16:22:50.375115626 +0000 UTC m=+1619.012178225" lastFinishedPulling="2026-01-30 16:22:55.896902253 +0000 UTC m=+1624.533964852" observedRunningTime="2026-01-30 16:22:56.643283481 +0000 UTC m=+1625.280346070" watchObservedRunningTime="2026-01-30 16:22:56.64567998 +0000 UTC m=+1625.282742579"
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.653200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c0199afa-1963-4ece-bcbc-272b76d500a0","Type":"ContainerStarted","Data":"690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a"}
Jan 30 16:22:56 crc kubenswrapper[4740]: I0130 16:22:56.730960 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.33326775 podStartE2EDuration="8.730787414s" podCreationTimestamp="2026-01-30 16:22:48 +0000 UTC" firstStartedPulling="2026-01-30 16:22:50.474925611 +0000 UTC m=+1619.111988210" lastFinishedPulling="2026-01-30 16:22:55.872445275 +0000 UTC m=+1624.509507874" observedRunningTime="2026-01-30 16:22:56.696844601 +0000 UTC m=+1625.333907200" watchObservedRunningTime="2026-01-30 16:22:56.730787414 +0000 UTC m=+1625.367850013"
Jan 30 16:22:57 crc kubenswrapper[4740]: I0130 16:22:57.666948 4740 generic.go:334] "Generic (PLEG): container finished" podID="cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" containerID="68720c8de9af007c6d359430808d9a91d8d701fd3ace3dbfa563c40b74a32e33" exitCode=143
Jan 30 16:22:57 crc kubenswrapper[4740]: I0130 16:22:57.667052 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d","Type":"ContainerDied","Data":"68720c8de9af007c6d359430808d9a91d8d701fd3ace3dbfa563c40b74a32e33"}
Jan 30 16:22:57 crc kubenswrapper[4740]: I0130 16:22:57.667512 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d","Type":"ContainerStarted","Data":"1422d4b1236292f82fa2b4ffad401e08095e4e4483b2c3bd356ee4cd68a30326"}
Jan 30 16:22:57 crc kubenswrapper[4740]: I0130 16:22:57.669944 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fc7e12f-2618-4590-a9b4-13ec70eceef0","Type":"ContainerStarted","Data":"ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda"}
Jan 30 16:22:57 crc kubenswrapper[4740]: I0130 16:22:57.672705 4740 generic.go:334] "Generic (PLEG): container finished" podID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerID="8fa7fb5a3fb61465f2a6d910d2a714b5bfd56cfb6e635e68f25a2ebe59262438" exitCode=0
Jan 30 16:22:57 crc kubenswrapper[4740]: I0130 16:22:57.673608 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cj6" event={"ID":"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2","Type":"ContainerDied","Data":"8fa7fb5a3fb61465f2a6d910d2a714b5bfd56cfb6e635e68f25a2ebe59262438"}
Jan 30 16:22:57 crc kubenswrapper[4740]: I0130 16:22:57.699015 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.461710221 podStartE2EDuration="9.6989837s" podCreationTimestamp="2026-01-30 16:22:48 +0000 UTC" firstStartedPulling="2026-01-30 16:22:49.661599443 +0000 UTC m=+1618.298662042" lastFinishedPulling="2026-01-30 16:22:55.898872922 +0000 UTC m=+1624.535935521" observedRunningTime="2026-01-30 16:22:57.690426247 +0000 UTC m=+1626.327488856" watchObservedRunningTime="2026-01-30 16:22:57.6989837 +0000 UTC m=+1626.336046299"
Jan 30 16:22:58 crc kubenswrapper[4740]: I0130 16:22:58.686819 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cj6" event={"ID":"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2","Type":"ContainerStarted","Data":"f1a0b8c784a67e014fb9dd3325ba792ff28a60583dc9c203a583535461e8d5de"}
Jan 30 16:22:58 crc kubenswrapper[4740]: I0130 16:22:58.730098 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 16:22:58 crc kubenswrapper[4740]: I0130 16:22:58.730196 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.008383 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.060305 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.060469 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.169509 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.170052 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.223123 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.264515 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.388557 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-qglcl"]
Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.388867 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" podUID="eca56045-0ade-4faf-b0a2-17a4702c1fd8" containerName="dnsmasq-dns" containerID="cri-o://7c0f9a5e1fc77cf1e86ea6785c5d2a1f2d86c1e961b577acd5d492309921f8ca" gracePeriod=10
Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.565802 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" podUID="eca56045-0ade-4faf-b0a2-17a4702c1fd8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: connect: connection refused"
podUID="eca56045-0ade-4faf-b0a2-17a4702c1fd8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: connect: connection refused" Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.729743 4740 generic.go:334] "Generic (PLEG): container finished" podID="eca56045-0ade-4faf-b0a2-17a4702c1fd8" containerID="7c0f9a5e1fc77cf1e86ea6785c5d2a1f2d86c1e961b577acd5d492309921f8ca" exitCode=0 Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.731838 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" event={"ID":"eca56045-0ade-4faf-b0a2-17a4702c1fd8","Type":"ContainerDied","Data":"7c0f9a5e1fc77cf1e86ea6785c5d2a1f2d86c1e961b577acd5d492309921f8ca"} Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.815612 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.815757 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:22:59 crc kubenswrapper[4740]: I0130 16:22:59.830069 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.372880 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.548243 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p45fh\" (UniqueName: \"kubernetes.io/projected/eca56045-0ade-4faf-b0a2-17a4702c1fd8-kube-api-access-p45fh\") pod \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.548463 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-sb\") pod \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.548542 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-swift-storage-0\") pod \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.548590 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-config\") pod \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.548738 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-nb\") pod \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\" (UID: 
\"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.548850 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-svc\") pod \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\" (UID: \"eca56045-0ade-4faf-b0a2-17a4702c1fd8\") " Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.564595 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca56045-0ade-4faf-b0a2-17a4702c1fd8-kube-api-access-p45fh" (OuterVolumeSpecName: "kube-api-access-p45fh") pod "eca56045-0ade-4faf-b0a2-17a4702c1fd8" (UID: "eca56045-0ade-4faf-b0a2-17a4702c1fd8"). InnerVolumeSpecName "kube-api-access-p45fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.653082 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p45fh\" (UniqueName: \"kubernetes.io/projected/eca56045-0ade-4faf-b0a2-17a4702c1fd8-kube-api-access-p45fh\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.680685 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eca56045-0ade-4faf-b0a2-17a4702c1fd8" (UID: "eca56045-0ade-4faf-b0a2-17a4702c1fd8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.686664 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eca56045-0ade-4faf-b0a2-17a4702c1fd8" (UID: "eca56045-0ade-4faf-b0a2-17a4702c1fd8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.711216 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eca56045-0ade-4faf-b0a2-17a4702c1fd8" (UID: "eca56045-0ade-4faf-b0a2-17a4702c1fd8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.756922 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-config" (OuterVolumeSpecName: "config") pod "eca56045-0ade-4faf-b0a2-17a4702c1fd8" (UID: "eca56045-0ade-4faf-b0a2-17a4702c1fd8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.758152 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eca56045-0ade-4faf-b0a2-17a4702c1fd8" (UID: "eca56045-0ade-4faf-b0a2-17a4702c1fd8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.758340 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.758562 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-qglcl" event={"ID":"eca56045-0ade-4faf-b0a2-17a4702c1fd8","Type":"ContainerDied","Data":"4381fcd1746ba81790814fae31b7d453ab06593e7f4ea4d2f4bddf0dcebe8c8a"} Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.758611 4740 scope.go:117] "RemoveContainer" containerID="7c0f9a5e1fc77cf1e86ea6785c5d2a1f2d86c1e961b577acd5d492309921f8ca" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.761671 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.761929 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.761948 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.761973 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.761987 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eca56045-0ade-4faf-b0a2-17a4702c1fd8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.921619 4740 scope.go:117] "RemoveContainer" containerID="4c3bfd8aefcf7bc397da39e3eee197876d12b6d20b76066b35b9454352f46fa0" Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.925646 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-qglcl"] Jan 30 16:23:00 crc kubenswrapper[4740]: I0130 16:23:00.941272 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-qglcl"] Jan 30 16:23:01 crc kubenswrapper[4740]: I0130 16:23:01.350143 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca56045-0ade-4faf-b0a2-17a4702c1fd8" path="/var/lib/kubelet/pods/eca56045-0ade-4faf-b0a2-17a4702c1fd8/volumes" Jan 30 16:23:01 crc kubenswrapper[4740]: I0130 16:23:01.778738 4740 generic.go:334] "Generic (PLEG): container finished" podID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerID="f1a0b8c784a67e014fb9dd3325ba792ff28a60583dc9c203a583535461e8d5de" exitCode=0 Jan 30 16:23:01 crc kubenswrapper[4740]: I0130 16:23:01.778856 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cj6" event={"ID":"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2","Type":"ContainerDied","Data":"f1a0b8c784a67e014fb9dd3325ba792ff28a60583dc9c203a583535461e8d5de"} Jan 30 16:23:02 crc kubenswrapper[4740]: I0130 16:23:02.797862 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cj6" event={"ID":"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2","Type":"ContainerStarted","Data":"1201a1f8cda119a4e308a32403e62ed92109434942dce1d1be1b9fe5000bdeaf"} Jan 30 16:23:02 
crc kubenswrapper[4740]: I0130 16:23:02.827752 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g2cj6" podStartSLOduration=6.13665947 podStartE2EDuration="10.82772648s" podCreationTimestamp="2026-01-30 16:22:52 +0000 UTC" firstStartedPulling="2026-01-30 16:22:57.675884186 +0000 UTC m=+1626.312946785" lastFinishedPulling="2026-01-30 16:23:02.366951196 +0000 UTC m=+1631.004013795" observedRunningTime="2026-01-30 16:23:02.820724596 +0000 UTC m=+1631.457787205" watchObservedRunningTime="2026-01-30 16:23:02.82772648 +0000 UTC m=+1631.464789079" Jan 30 16:23:02 crc kubenswrapper[4740]: I0130 16:23:02.911470 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g2cj6" Jan 30 16:23:02 crc kubenswrapper[4740]: I0130 16:23:02.912033 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g2cj6" Jan 30 16:23:03 crc kubenswrapper[4740]: I0130 16:23:03.814433 4740 generic.go:334] "Generic (PLEG): container finished" podID="c239553d-1e26-46d3-9487-17a11ad18ad9" containerID="2f5ce6e264a40d5796fae903ead052c7cdfa8bde92eeeff8fabd0dcc83223b14" exitCode=0 Jan 30 16:23:03 crc kubenswrapper[4740]: I0130 16:23:03.814550 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-98dhw" event={"ID":"c239553d-1e26-46d3-9487-17a11ad18ad9","Type":"ContainerDied","Data":"2f5ce6e264a40d5796fae903ead052c7cdfa8bde92eeeff8fabd0dcc83223b14"} Jan 30 16:23:03 crc kubenswrapper[4740]: I0130 16:23:03.967016 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g2cj6" podUID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerName="registry-server" probeResult="failure" output=< Jan 30 16:23:03 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:23:03 crc kubenswrapper[4740]: > Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.431004 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-98dhw" Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.499462 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-config-data\") pod \"c239553d-1e26-46d3-9487-17a11ad18ad9\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.499580 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-456cd\" (UniqueName: \"kubernetes.io/projected/c239553d-1e26-46d3-9487-17a11ad18ad9-kube-api-access-456cd\") pod \"c239553d-1e26-46d3-9487-17a11ad18ad9\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.499835 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-combined-ca-bundle\") pod \"c239553d-1e26-46d3-9487-17a11ad18ad9\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.499900 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-scripts\") pod \"c239553d-1e26-46d3-9487-17a11ad18ad9\" (UID: \"c239553d-1e26-46d3-9487-17a11ad18ad9\") " Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.527712 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c239553d-1e26-46d3-9487-17a11ad18ad9-kube-api-access-456cd" (OuterVolumeSpecName: "kube-api-access-456cd") pod "c239553d-1e26-46d3-9487-17a11ad18ad9" (UID: "c239553d-1e26-46d3-9487-17a11ad18ad9"). InnerVolumeSpecName "kube-api-access-456cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.535537 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-scripts" (OuterVolumeSpecName: "scripts") pod "c239553d-1e26-46d3-9487-17a11ad18ad9" (UID: "c239553d-1e26-46d3-9487-17a11ad18ad9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.573175 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c239553d-1e26-46d3-9487-17a11ad18ad9" (UID: "c239553d-1e26-46d3-9487-17a11ad18ad9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.583556 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-config-data" (OuterVolumeSpecName: "config-data") pod "c239553d-1e26-46d3-9487-17a11ad18ad9" (UID: "c239553d-1e26-46d3-9487-17a11ad18ad9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.606781 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.606838 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.606849 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c239553d-1e26-46d3-9487-17a11ad18ad9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.606861 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-456cd\" (UniqueName: \"kubernetes.io/projected/c239553d-1e26-46d3-9487-17a11ad18ad9-kube-api-access-456cd\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.843314 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-98dhw" event={"ID":"c239553d-1e26-46d3-9487-17a11ad18ad9","Type":"ContainerDied","Data":"0d828877ab1a52e5314cf8d8bf2670a5b28f4433a0de1cc37667eb7bb14447be"} Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.843407 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d828877ab1a52e5314cf8d8bf2670a5b28f4433a0de1cc37667eb7bb14447be" Jan 30 16:23:05 crc kubenswrapper[4740]: I0130 16:23:05.843936 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-98dhw" Jan 30 16:23:05 crc kubenswrapper[4740]: E0130 16:23:05.963693 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc239553d_1e26_46d3_9487_17a11ad18ad9.slice/crio-0d828877ab1a52e5314cf8d8bf2670a5b28f4433a0de1cc37667eb7bb14447be\": RecentStats: unable to find data in memory cache]" Jan 30 16:23:06 crc kubenswrapper[4740]: I0130 16:23:06.044647 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:06 crc kubenswrapper[4740]: I0130 16:23:06.045046 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerName="nova-api-log" containerID="cri-o://c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b" gracePeriod=30 Jan 30 16:23:06 crc kubenswrapper[4740]: I0130 16:23:06.045253 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerName="nova-api-api" containerID="cri-o://ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda" gracePeriod=30 Jan 30 16:23:06 crc kubenswrapper[4740]: I0130 16:23:06.061823 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:06 crc kubenswrapper[4740]: I0130 16:23:06.062135 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c0199afa-1963-4ece-bcbc-272b76d500a0" containerName="nova-scheduler-scheduler" 
containerID="cri-o://690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a" gracePeriod=30 Jan 30 16:23:06 crc kubenswrapper[4740]: I0130 16:23:06.864779 4740 generic.go:334] "Generic (PLEG): container finished" podID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerID="c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b" exitCode=143 Jan 30 16:23:06 crc kubenswrapper[4740]: I0130 16:23:06.865280 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fc7e12f-2618-4590-a9b4-13ec70eceef0","Type":"ContainerDied","Data":"c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b"} Jan 30 16:23:07 crc kubenswrapper[4740]: I0130 16:23:07.878775 4740 generic.go:334] "Generic (PLEG): container finished" podID="df10ecb6-43ad-404c-b51c-b64913bab019" containerID="e5569e41650d8cc725597ad721a090481c7b7f95dafbc9916f0d23819cb3cabc" exitCode=0 Jan 30 16:23:07 crc kubenswrapper[4740]: I0130 16:23:07.878831 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-88k28" event={"ID":"df10ecb6-43ad-404c-b51c-b64913bab019","Type":"ContainerDied","Data":"e5569e41650d8cc725597ad721a090481c7b7f95dafbc9916f0d23819cb3cabc"} Jan 30 16:23:09 crc kubenswrapper[4740]: E0130 16:23:09.178858 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:23:09 crc kubenswrapper[4740]: E0130 16:23:09.183630 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:23:09 crc kubenswrapper[4740]: E0130 16:23:09.184740 4740 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 16:23:09 crc kubenswrapper[4740]: E0130 16:23:09.184775 4740 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c0199afa-1963-4ece-bcbc-272b76d500a0" containerName="nova-scheduler-scheduler" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.414288 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-88k28" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.515897 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-config-data\") pod \"df10ecb6-43ad-404c-b51c-b64913bab019\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.516025 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5jn6\" (UniqueName: \"kubernetes.io/projected/df10ecb6-43ad-404c-b51c-b64913bab019-kube-api-access-k5jn6\") pod \"df10ecb6-43ad-404c-b51c-b64913bab019\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.516069 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-combined-ca-bundle\") pod \"df10ecb6-43ad-404c-b51c-b64913bab019\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.516315 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-scripts\") pod \"df10ecb6-43ad-404c-b51c-b64913bab019\" (UID: \"df10ecb6-43ad-404c-b51c-b64913bab019\") " Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.525329 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-scripts" (OuterVolumeSpecName: "scripts") pod "df10ecb6-43ad-404c-b51c-b64913bab019" (UID: "df10ecb6-43ad-404c-b51c-b64913bab019"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.543759 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df10ecb6-43ad-404c-b51c-b64913bab019-kube-api-access-k5jn6" (OuterVolumeSpecName: "kube-api-access-k5jn6") pod "df10ecb6-43ad-404c-b51c-b64913bab019" (UID: "df10ecb6-43ad-404c-b51c-b64913bab019"). InnerVolumeSpecName "kube-api-access-k5jn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.558595 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-config-data" (OuterVolumeSpecName: "config-data") pod "df10ecb6-43ad-404c-b51c-b64913bab019" (UID: "df10ecb6-43ad-404c-b51c-b64913bab019"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.574301 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df10ecb6-43ad-404c-b51c-b64913bab019" (UID: "df10ecb6-43ad-404c-b51c-b64913bab019"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.620423 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.621411 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.621575 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5jn6\" (UniqueName: \"kubernetes.io/projected/df10ecb6-43ad-404c-b51c-b64913bab019-kube-api-access-k5jn6\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.621817 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df10ecb6-43ad-404c-b51c-b64913bab019-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.745661 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.825216 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-config-data\") pod \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.825432 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-combined-ca-bundle\") pod \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.825497 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lsv9\" (UniqueName: \"kubernetes.io/projected/1fc7e12f-2618-4590-a9b4-13ec70eceef0-kube-api-access-9lsv9\") pod \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.825551 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fc7e12f-2618-4590-a9b4-13ec70eceef0-logs\") pod \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\" (UID: \"1fc7e12f-2618-4590-a9b4-13ec70eceef0\") " Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.826157 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fc7e12f-2618-4590-a9b4-13ec70eceef0-logs" (OuterVolumeSpecName: "logs") pod "1fc7e12f-2618-4590-a9b4-13ec70eceef0" (UID: "1fc7e12f-2618-4590-a9b4-13ec70eceef0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.829108 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fc7e12f-2618-4590-a9b4-13ec70eceef0-kube-api-access-9lsv9" (OuterVolumeSpecName: "kube-api-access-9lsv9") pod "1fc7e12f-2618-4590-a9b4-13ec70eceef0" (UID: "1fc7e12f-2618-4590-a9b4-13ec70eceef0"). InnerVolumeSpecName "kube-api-access-9lsv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.861383 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fc7e12f-2618-4590-a9b4-13ec70eceef0" (UID: "1fc7e12f-2618-4590-a9b4-13ec70eceef0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.861817 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-config-data" (OuterVolumeSpecName: "config-data") pod "1fc7e12f-2618-4590-a9b4-13ec70eceef0" (UID: "1fc7e12f-2618-4590-a9b4-13ec70eceef0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.909064 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-88k28" event={"ID":"df10ecb6-43ad-404c-b51c-b64913bab019","Type":"ContainerDied","Data":"cd7e7ebb7e18fc2c122df30948b863b61e0ef7169d07a36b39e16465b304a7f3"} Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.909120 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd7e7ebb7e18fc2c122df30948b863b61e0ef7169d07a36b39e16465b304a7f3" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.909080 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-88k28" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.936319 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lsv9\" (UniqueName: \"kubernetes.io/projected/1fc7e12f-2618-4590-a9b4-13ec70eceef0-kube-api-access-9lsv9\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.936398 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fc7e12f-2618-4590-a9b4-13ec70eceef0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.936414 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.936428 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc7e12f-2618-4590-a9b4-13ec70eceef0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.944889 4740 generic.go:334] "Generic (PLEG): container finished" podID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerID="ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda" exitCode=0 Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.944933 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1fc7e12f-2618-4590-a9b4-13ec70eceef0","Type":"ContainerDied","Data":"ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda"} Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.944968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"1fc7e12f-2618-4590-a9b4-13ec70eceef0","Type":"ContainerDied","Data":"13595dde6ebabec458a0fa8ea3636e849164bea079cd7226f1064b6a032638e2"} Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.944989 4740 scope.go:117] "RemoveContainer" containerID="ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda" Jan 30 16:23:09 crc kubenswrapper[4740]: I0130 16:23:09.945172 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.009640 4740 scope.go:117] "RemoveContainer" containerID="c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.016749 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.073185 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.109542 4740 scope.go:117] "RemoveContainer" containerID="ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda" Jan 30 16:23:10 crc kubenswrapper[4740]: E0130 16:23:10.110820 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda\": container with ID starting with ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda not found: ID does not exist" containerID="ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.110889 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda"} err="failed to get container status \"ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda\": rpc error: code = NotFound desc = could not find container \"ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda\": container with ID starting with ab97a20fa003ec24a770933c0662de6bf403a5a57cf70ec3adcb2c8ad21accda not found: ID does not exist" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.110924 4740 scope.go:117] "RemoveContainer" containerID="c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b" Jan 30 16:23:10 crc kubenswrapper[4740]: E0130 16:23:10.111732 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b\": container with ID starting with c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b not found: ID does not exist" containerID="c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.111762 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b"} err="failed to get container status \"c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b\": rpc error: code = NotFound desc = could not find container \"c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b\": container with ID starting with c59fcd2400b3a3a87490336886456137df4c0c080e718e22fbfd4c09fa314f4b not found: ID does not exist" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.117470 4740 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:23:10 crc kubenswrapper[4740]: E0130 16:23:10.118238 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca56045-0ade-4faf-b0a2-17a4702c1fd8" containerName="dnsmasq-dns" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.118272 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca56045-0ade-4faf-b0a2-17a4702c1fd8" containerName="dnsmasq-dns" Jan 30 16:23:10 crc kubenswrapper[4740]: E0130 16:23:10.118301 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df10ecb6-43ad-404c-b51c-b64913bab019" containerName="nova-cell1-conductor-db-sync" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.118309 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="df10ecb6-43ad-404c-b51c-b64913bab019" containerName="nova-cell1-conductor-db-sync" Jan 30 16:23:10 crc kubenswrapper[4740]: E0130 16:23:10.118326 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerName="nova-api-api" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.118333 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerName="nova-api-api" Jan 30 16:23:10 crc kubenswrapper[4740]: E0130 16:23:10.118343 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca56045-0ade-4faf-b0a2-17a4702c1fd8" containerName="init" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.118364 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca56045-0ade-4faf-b0a2-17a4702c1fd8" containerName="init" Jan 30 16:23:10 crc kubenswrapper[4740]: E0130 16:23:10.118393 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c239553d-1e26-46d3-9487-17a11ad18ad9" containerName="nova-manage" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.118399 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c239553d-1e26-46d3-9487-17a11ad18ad9" containerName="nova-manage" Jan 30 16:23:10 crc kubenswrapper[4740]: E0130 16:23:10.118411 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerName="nova-api-log" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.118418 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerName="nova-api-log" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.118677 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerName="nova-api-api" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.118702 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca56045-0ade-4faf-b0a2-17a4702c1fd8" containerName="dnsmasq-dns" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.118711 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="df10ecb6-43ad-404c-b51c-b64913bab019" containerName="nova-cell1-conductor-db-sync" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.118729 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c239553d-1e26-46d3-9487-17a11ad18ad9" containerName="nova-manage" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.118740 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" containerName="nova-api-log" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.119837 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.123734 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.136975 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.148789 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.156208 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.165446 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.185953 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqcbr\" (UniqueName: \"kubernetes.io/projected/8372d763-1fed-4ff1-a573-ae34f6758115-kube-api-access-sqcbr\") pod \"nova-cell1-conductor-0\" (UID: \"8372d763-1fed-4ff1-a573-ae34f6758115\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.186009 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcchj\" (UniqueName: \"kubernetes.io/projected/3715417d-ffd8-44be-83fb-49298212ff8b-kube-api-access-mcchj\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.186049 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-config-data\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.186396 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8372d763-1fed-4ff1-a573-ae34f6758115-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8372d763-1fed-4ff1-a573-ae34f6758115\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.186679 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8372d763-1fed-4ff1-a573-ae34f6758115-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8372d763-1fed-4ff1-a573-ae34f6758115\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.186853 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.187006 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3715417d-ffd8-44be-83fb-49298212ff8b-logs\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 
crc kubenswrapper[4740]: I0130 16:23:10.194490 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.289316 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8372d763-1fed-4ff1-a573-ae34f6758115-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8372d763-1fed-4ff1-a573-ae34f6758115\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.289406 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.289461 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3715417d-ffd8-44be-83fb-49298212ff8b-logs\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.289515 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqcbr\" (UniqueName: \"kubernetes.io/projected/8372d763-1fed-4ff1-a573-ae34f6758115-kube-api-access-sqcbr\") pod \"nova-cell1-conductor-0\" (UID: \"8372d763-1fed-4ff1-a573-ae34f6758115\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.289537 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcchj\" (UniqueName: \"kubernetes.io/projected/3715417d-ffd8-44be-83fb-49298212ff8b-kube-api-access-mcchj\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.289562 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-config-data\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.289620 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8372d763-1fed-4ff1-a573-ae34f6758115-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8372d763-1fed-4ff1-a573-ae34f6758115\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.293136 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3715417d-ffd8-44be-83fb-49298212ff8b-logs\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.295842 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8372d763-1fed-4ff1-a573-ae34f6758115-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8372d763-1fed-4ff1-a573-ae34f6758115\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.297207 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8372d763-1fed-4ff1-a573-ae34f6758115-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8372d763-1fed-4ff1-a573-ae34f6758115\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.301536 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-config-data\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.312078 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.321145 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcchj\" (UniqueName: \"kubernetes.io/projected/3715417d-ffd8-44be-83fb-49298212ff8b-kube-api-access-mcchj\") pod \"nova-api-0\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") " pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.326461 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqcbr\" (UniqueName: \"kubernetes.io/projected/8372d763-1fed-4ff1-a573-ae34f6758115-kube-api-access-sqcbr\") pod \"nova-cell1-conductor-0\" (UID: \"8372d763-1fed-4ff1-a573-ae34f6758115\") " pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.453601 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.484851 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.629226 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.704971 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsqhp\" (UniqueName: \"kubernetes.io/projected/c0199afa-1963-4ece-bcbc-272b76d500a0-kube-api-access-nsqhp\") pod \"c0199afa-1963-4ece-bcbc-272b76d500a0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.705146 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-config-data\") pod \"c0199afa-1963-4ece-bcbc-272b76d500a0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.705435 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-combined-ca-bundle\") pod \"c0199afa-1963-4ece-bcbc-272b76d500a0\" (UID: \"c0199afa-1963-4ece-bcbc-272b76d500a0\") " Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.711048 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0199afa-1963-4ece-bcbc-272b76d500a0-kube-api-access-nsqhp" (OuterVolumeSpecName: "kube-api-access-nsqhp") pod "c0199afa-1963-4ece-bcbc-272b76d500a0" (UID: "c0199afa-1963-4ece-bcbc-272b76d500a0"). InnerVolumeSpecName "kube-api-access-nsqhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.748883 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-config-data" (OuterVolumeSpecName: "config-data") pod "c0199afa-1963-4ece-bcbc-272b76d500a0" (UID: "c0199afa-1963-4ece-bcbc-272b76d500a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.760433 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0199afa-1963-4ece-bcbc-272b76d500a0" (UID: "c0199afa-1963-4ece-bcbc-272b76d500a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.815072 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsqhp\" (UniqueName: \"kubernetes.io/projected/c0199afa-1963-4ece-bcbc-272b76d500a0-kube-api-access-nsqhp\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.815130 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.815144 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0199afa-1963-4ece-bcbc-272b76d500a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.960149 4740 generic.go:334] "Generic (PLEG): container finished" podID="c0199afa-1963-4ece-bcbc-272b76d500a0" containerID="690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a" exitCode=0 Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.960206 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c0199afa-1963-4ece-bcbc-272b76d500a0","Type":"ContainerDied","Data":"690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a"} Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.960247 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c0199afa-1963-4ece-bcbc-272b76d500a0","Type":"ContainerDied","Data":"ff3bef220291b9fc1bf77199bcf1c2adc53a86f6b071976447aacc8759fdcead"} Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.960239 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.960268 4740 scope.go:117] "RemoveContainer" containerID="690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.994604 4740 scope.go:117] "RemoveContainer" containerID="690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a" Jan 30 16:23:10 crc kubenswrapper[4740]: E0130 16:23:10.995773 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a\": container with ID starting with 690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a not found: ID does not exist" containerID="690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a" Jan 30 16:23:10 crc kubenswrapper[4740]: I0130 16:23:10.995804 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a"} err="failed to get container status \"690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a\": rpc error: code = NotFound desc = could not find container \"690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a\": container with ID starting with 690a1f29d3115db6e7bb1575a35ada5b5684ab8eb57b26fbff4ec895457f6d6a not found: ID does not exist" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.023888 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.041630 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.071014 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.094387 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:11 crc kubenswrapper[4740]: E0130 16:23:11.095567 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0199afa-1963-4ece-bcbc-272b76d500a0" containerName="nova-scheduler-scheduler" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.095602 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0199afa-1963-4ece-bcbc-272b76d500a0" containerName="nova-scheduler-scheduler" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.096166 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0199afa-1963-4ece-bcbc-272b76d500a0" containerName="nova-scheduler-scheduler" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.098163 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.105186 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.109613 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.127198 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-config-data\") pod \"nova-scheduler-0\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.127298 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.127459 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r2fb\" (UniqueName: \"kubernetes.io/projected/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-kube-api-access-5r2fb\") pod \"nova-scheduler-0\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.128595 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.231073 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r2fb\" (UniqueName: \"kubernetes.io/projected/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-kube-api-access-5r2fb\") pod \"nova-scheduler-0\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.231324 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-config-data\") pod \"nova-scheduler-0\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.231430 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.238500 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.241237 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-config-data\") pod \"nova-scheduler-0\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 
16:23:11.255104 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r2fb\" (UniqueName: \"kubernetes.io/projected/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-kube-api-access-5r2fb\") pod \"nova-scheduler-0\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.335514 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:23:11 crc kubenswrapper[4740]: E0130 16:23:11.336454 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.363570 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fc7e12f-2618-4590-a9b4-13ec70eceef0" path="/var/lib/kubelet/pods/1fc7e12f-2618-4590-a9b4-13ec70eceef0/volumes" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.364272 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0199afa-1963-4ece-bcbc-272b76d500a0" path="/var/lib/kubelet/pods/c0199afa-1963-4ece-bcbc-272b76d500a0/volumes" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.428720 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.920302 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.973894 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc","Type":"ContainerStarted","Data":"fbe3c5bce19bf95cafd44844885bbb6f6616b38a9bc9e08e194a55b016857214"} Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.978172 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8372d763-1fed-4ff1-a573-ae34f6758115","Type":"ContainerStarted","Data":"7ffe2142317d9c08878320b1a34d49c685a115d6a45fd7b8f653683d373463bd"} Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.978221 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8372d763-1fed-4ff1-a573-ae34f6758115","Type":"ContainerStarted","Data":"d362f194b8430f68d2638a9943954ad1f457cf464c3e7e89c0e0b8cb95e9dca0"} Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.980050 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.984191 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3715417d-ffd8-44be-83fb-49298212ff8b","Type":"ContainerStarted","Data":"15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0"} Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 16:23:11.984226 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3715417d-ffd8-44be-83fb-49298212ff8b","Type":"ContainerStarted","Data":"2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02"} Jan 30 16:23:11 crc kubenswrapper[4740]: I0130 
16:23:11.984236 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3715417d-ffd8-44be-83fb-49298212ff8b","Type":"ContainerStarted","Data":"1017a0b5ec32876480f24b59d89172880e27c6a6f563674aa7412e84e1201b55"} Jan 30 16:23:12 crc kubenswrapper[4740]: I0130 16:23:12.005294 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.005255268 podStartE2EDuration="3.005255268s" podCreationTimestamp="2026-01-30 16:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:23:11.99728117 +0000 UTC m=+1640.634343769" watchObservedRunningTime="2026-01-30 16:23:12.005255268 +0000 UTC m=+1640.642317887" Jan 30 16:23:12 crc kubenswrapper[4740]: I0130 16:23:12.034335 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.034260668 podStartE2EDuration="3.034260668s" podCreationTimestamp="2026-01-30 16:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:23:12.021648395 +0000 UTC m=+1640.658710994" watchObservedRunningTime="2026-01-30 16:23:12.034260668 +0000 UTC m=+1640.671323267" Jan 30 16:23:13 crc kubenswrapper[4740]: I0130 16:23:13.001341 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc","Type":"ContainerStarted","Data":"c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20"} Jan 30 16:23:13 crc kubenswrapper[4740]: I0130 16:23:13.037921 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.037888105 podStartE2EDuration="2.037888105s" podCreationTimestamp="2026-01-30 16:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:23:13.023272712 +0000 UTC m=+1641.660335311" watchObservedRunningTime="2026-01-30 16:23:13.037888105 +0000 UTC m=+1641.674950704" Jan 30 16:23:13 crc kubenswrapper[4740]: I0130 16:23:13.973374 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g2cj6" podUID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerName="registry-server" probeResult="failure" output=< Jan 30 16:23:13 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:23:13 crc kubenswrapper[4740]: > Jan 30 16:23:16 crc kubenswrapper[4740]: I0130 16:23:16.429561 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 16:23:20 crc kubenswrapper[4740]: I0130 16:23:20.485853 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:23:20 crc kubenswrapper[4740]: I0130 16:23:20.486778 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:23:20 crc kubenswrapper[4740]: I0130 16:23:20.488597 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 16:23:21 crc kubenswrapper[4740]: I0130 16:23:21.429844 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 16:23:21 crc kubenswrapper[4740]: I0130 16:23:21.476488 
4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 16:23:21 crc kubenswrapper[4740]: I0130 16:23:21.569598 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3715417d-ffd8-44be-83fb-49298212ff8b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:23:21 crc kubenswrapper[4740]: I0130 16:23:21.569647 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3715417d-ffd8-44be-83fb-49298212ff8b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.622965 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:23:22 crc kubenswrapper[4740]: E0130 16:23:22.623308 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.698894 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tmbnk"] Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.703751 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.716657 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmbnk"] Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.757823 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.828123 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-utilities\") pod \"community-operators-tmbnk\" (UID: \"91ed5442-418e-464d-a02c-f5c84075e45d\") " pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.828181 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd48h\" (UniqueName: \"kubernetes.io/projected/91ed5442-418e-464d-a02c-f5c84075e45d-kube-api-access-fd48h\") pod \"community-operators-tmbnk\" (UID: \"91ed5442-418e-464d-a02c-f5c84075e45d\") " pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.828239 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-catalog-content\") pod \"community-operators-tmbnk\" (UID: \"91ed5442-418e-464d-a02c-f5c84075e45d\") " pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.930717 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-utilities\") pod \"community-operators-tmbnk\" (UID: \"91ed5442-418e-464d-a02c-f5c84075e45d\") " pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.930798 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd48h\" (UniqueName: \"kubernetes.io/projected/91ed5442-418e-464d-a02c-f5c84075e45d-kube-api-access-fd48h\") pod \"community-operators-tmbnk\" (UID: \"91ed5442-418e-464d-a02c-f5c84075e45d\") " pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.930847 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-catalog-content\") pod \"community-operators-tmbnk\" (UID: \"91ed5442-418e-464d-a02c-f5c84075e45d\") " pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.931448 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-catalog-content\") pod \"community-operators-tmbnk\" (UID: \"91ed5442-418e-464d-a02c-f5c84075e45d\") " pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.931701 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-utilities\") pod \"community-operators-tmbnk\" (UID: 
\"91ed5442-418e-464d-a02c-f5c84075e45d\") " pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.957056 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd48h\" (UniqueName: \"kubernetes.io/projected/91ed5442-418e-464d-a02c-f5c84075e45d-kube-api-access-fd48h\") pod \"community-operators-tmbnk\" (UID: \"91ed5442-418e-464d-a02c-f5c84075e45d\") " pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:22 crc kubenswrapper[4740]: I0130 16:23:22.977264 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g2cj6" Jan 30 16:23:23 crc kubenswrapper[4740]: I0130 16:23:23.047610 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g2cj6" Jan 30 16:23:23 crc kubenswrapper[4740]: I0130 16:23:23.077875 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:23 crc kubenswrapper[4740]: I0130 16:23:23.818912 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmbnk"] Jan 30 16:23:24 crc kubenswrapper[4740]: I0130 16:23:24.672111 4740 generic.go:334] "Generic (PLEG): container finished" podID="91ed5442-418e-464d-a02c-f5c84075e45d" containerID="a2e3e7123b17398926c0c4bf54686abd67d633ed750f34b98a1266f30766d527" exitCode=0 Jan 30 16:23:24 crc kubenswrapper[4740]: I0130 16:23:24.672609 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmbnk" event={"ID":"91ed5442-418e-464d-a02c-f5c84075e45d","Type":"ContainerDied","Data":"a2e3e7123b17398926c0c4bf54686abd67d633ed750f34b98a1266f30766d527"} Jan 30 16:23:24 crc kubenswrapper[4740]: I0130 16:23:24.672650 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmbnk" event={"ID":"91ed5442-418e-464d-a02c-f5c84075e45d","Type":"ContainerStarted","Data":"adb38d0738196e4b168e641492bfa81a87358f1a96531cd508c039d9b7290652"} Jan 30 16:23:25 crc kubenswrapper[4740]: I0130 16:23:25.350462 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cj6"] Jan 30 16:23:25 crc kubenswrapper[4740]: I0130 16:23:25.351188 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g2cj6" podUID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerName="registry-server" containerID="cri-o://1201a1f8cda119a4e308a32403e62ed92109434942dce1d1be1b9fe5000bdeaf" gracePeriod=2 Jan 30 16:23:25 crc kubenswrapper[4740]: I0130 16:23:25.708273 4740 generic.go:334] "Generic (PLEG): container finished" podID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerID="1201a1f8cda119a4e308a32403e62ed92109434942dce1d1be1b9fe5000bdeaf" exitCode=0 Jan 30 16:23:25 crc kubenswrapper[4740]: I0130 16:23:25.708331 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cj6" event={"ID":"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2","Type":"ContainerDied","Data":"1201a1f8cda119a4e308a32403e62ed92109434942dce1d1be1b9fe5000bdeaf"} Jan 30 16:23:25 crc kubenswrapper[4740]: I0130 16:23:25.938717 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2cj6" Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.016671 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-utilities\") pod \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.017360 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhrg9\" (UniqueName: \"kubernetes.io/projected/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-kube-api-access-lhrg9\") pod \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.017598 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-catalog-content\") pod \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\" (UID: \"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2\") " Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.018124 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-utilities" (OuterVolumeSpecName: "utilities") pod "9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" (UID: "9e2b8175-bad8-4c41-9bf6-39a35c77e2b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.018710 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.027028 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-kube-api-access-lhrg9" (OuterVolumeSpecName: "kube-api-access-lhrg9") pod "9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" (UID: "9e2b8175-bad8-4c41-9bf6-39a35c77e2b2"). InnerVolumeSpecName "kube-api-access-lhrg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.059746 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" (UID: "9e2b8175-bad8-4c41-9bf6-39a35c77e2b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.128935 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.128996 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhrg9\" (UniqueName: \"kubernetes.io/projected/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2-kube-api-access-lhrg9\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.730034 4740 generic.go:334] "Generic (PLEG): container finished" podID="0b543a20-70dd-45ec-8141-4bff06d6d0ce" containerID="5fdfbd1770f95f346a0ab41f9ff9e297c076907cc69b48736e5aee5e70f2dbc6" exitCode=137 Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.730230 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b543a20-70dd-45ec-8141-4bff06d6d0ce","Type":"ContainerDied","Data":"5fdfbd1770f95f346a0ab41f9ff9e297c076907cc69b48736e5aee5e70f2dbc6"} Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.735809 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2cj6" event={"ID":"9e2b8175-bad8-4c41-9bf6-39a35c77e2b2","Type":"ContainerDied","Data":"76cd319389155d9b794b166f7f4b73f3811bbb96fb657a118ecfc949ccef974e"} Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.735879 4740 scope.go:117] "RemoveContainer" containerID="1201a1f8cda119a4e308a32403e62ed92109434942dce1d1be1b9fe5000bdeaf" Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.736133 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2cj6" Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.740586 4740 generic.go:334] "Generic (PLEG): container finished" podID="cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" containerID="1422d4b1236292f82fa2b4ffad401e08095e4e4483b2c3bd356ee4cd68a30326" exitCode=137 Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.740675 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d","Type":"ContainerDied","Data":"1422d4b1236292f82fa2b4ffad401e08095e4e4483b2c3bd356ee4cd68a30326"} Jan 30 16:23:26 crc kubenswrapper[4740]: I0130 16:23:26.745085 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmbnk" event={"ID":"91ed5442-418e-464d-a02c-f5c84075e45d","Type":"ContainerStarted","Data":"c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501"} Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:26.823296 4740 scope.go:117] "RemoveContainer" containerID="f1a0b8c784a67e014fb9dd3325ba792ff28a60583dc9c203a583535461e8d5de" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:26.841569 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cj6"] Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:26.853636 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2cj6"] Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:26.854517 4740 scope.go:117] "RemoveContainer" containerID="8fa7fb5a3fb61465f2a6d910d2a714b5bfd56cfb6e635e68f25a2ebe59262438" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.160156 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.255275 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-combined-ca-bundle\") pod \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.255665 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-config-data\") pod \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.255805 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvvx8\" (UniqueName: \"kubernetes.io/projected/0b543a20-70dd-45ec-8141-4bff06d6d0ce-kube-api-access-rvvx8\") pod \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\" (UID: \"0b543a20-70dd-45ec-8141-4bff06d6d0ce\") " Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.268505 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b543a20-70dd-45ec-8141-4bff06d6d0ce-kube-api-access-rvvx8" (OuterVolumeSpecName: "kube-api-access-rvvx8") pod "0b543a20-70dd-45ec-8141-4bff06d6d0ce" (UID: "0b543a20-70dd-45ec-8141-4bff06d6d0ce"). InnerVolumeSpecName "kube-api-access-rvvx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.302480 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-config-data" (OuterVolumeSpecName: "config-data") pod "0b543a20-70dd-45ec-8141-4bff06d6d0ce" (UID: "0b543a20-70dd-45ec-8141-4bff06d6d0ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.322989 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b543a20-70dd-45ec-8141-4bff06d6d0ce" (UID: "0b543a20-70dd-45ec-8141-4bff06d6d0ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.372390 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.372445 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvvx8\" (UniqueName: \"kubernetes.io/projected/0b543a20-70dd-45ec-8141-4bff06d6d0ce-kube-api-access-rvvx8\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.372461 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b543a20-70dd-45ec-8141-4bff06d6d0ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.374548 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" path="/var/lib/kubelet/pods/9e2b8175-bad8-4c41-9bf6-39a35c77e2b2/volumes" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.630237 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.679176 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq4lx\" (UniqueName: \"kubernetes.io/projected/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-kube-api-access-bq4lx\") pod \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.679957 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-config-data\") pod \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.680092 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-combined-ca-bundle\") pod \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.680214 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-logs\") pod \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\" (UID: \"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d\") " Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.681049 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-logs" (OuterVolumeSpecName: "logs") pod "cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" (UID: "cf2f4c9c-d1eb-4994-a199-a1ff7f53660d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.681875 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.683705 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-kube-api-access-bq4lx" (OuterVolumeSpecName: "kube-api-access-bq4lx") pod "cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" (UID: "cf2f4c9c-d1eb-4994-a199-a1ff7f53660d"). InnerVolumeSpecName "kube-api-access-bq4lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.710229 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-config-data" (OuterVolumeSpecName: "config-data") pod "cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" (UID: "cf2f4c9c-d1eb-4994-a199-a1ff7f53660d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.712141 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" (UID: "cf2f4c9c-d1eb-4994-a199-a1ff7f53660d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.757870 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.757854 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf2f4c9c-d1eb-4994-a199-a1ff7f53660d","Type":"ContainerDied","Data":"1540cbedc5935b29d73732c3df4316c7d91eeb1d69d4a95f467ac05f35ed58c4"} Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.758062 4740 scope.go:117] "RemoveContainer" containerID="1422d4b1236292f82fa2b4ffad401e08095e4e4483b2c3bd356ee4cd68a30326" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.763108 4740 generic.go:334] "Generic (PLEG): container finished" podID="91ed5442-418e-464d-a02c-f5c84075e45d" containerID="c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501" exitCode=0 Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.763246 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmbnk" event={"ID":"91ed5442-418e-464d-a02c-f5c84075e45d","Type":"ContainerDied","Data":"c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501"} Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.765811 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b543a20-70dd-45ec-8141-4bff06d6d0ce","Type":"ContainerDied","Data":"21e49f1e4617596db3e4865792a36b10ed3a53c183c27fdfa0abdc4c07efa159"} Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.766042 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.786665 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq4lx\" (UniqueName: \"kubernetes.io/projected/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-kube-api-access-bq4lx\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.786735 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.786746 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.830651 4740 scope.go:117] "RemoveContainer" containerID="68720c8de9af007c6d359430808d9a91d8d701fd3ace3dbfa563c40b74a32e33" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.833199 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.845824 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.861552 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.874611 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.886092 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 
16:23:27 crc kubenswrapper[4740]: E0130 16:23:27.886844 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerName="registry-server" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.886870 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerName="registry-server" Jan 30 16:23:27 crc kubenswrapper[4740]: E0130 16:23:27.886888 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" containerName="nova-metadata-metadata" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.886896 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" containerName="nova-metadata-metadata" Jan 30 16:23:27 crc kubenswrapper[4740]: E0130 16:23:27.886911 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerName="extract-utilities" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.886920 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerName="extract-utilities" Jan 30 16:23:27 crc kubenswrapper[4740]: E0130 16:23:27.886948 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" containerName="nova-metadata-log" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.886954 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" containerName="nova-metadata-log" Jan 30 16:23:27 crc kubenswrapper[4740]: E0130 16:23:27.886986 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b543a20-70dd-45ec-8141-4bff06d6d0ce" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.886995 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b543a20-70dd-45ec-8141-4bff06d6d0ce" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:23:27 crc kubenswrapper[4740]: E0130 16:23:27.887010 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerName="extract-content" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.887018 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerName="extract-content" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.887309 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e2b8175-bad8-4c41-9bf6-39a35c77e2b2" containerName="registry-server" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.887339 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" containerName="nova-metadata-metadata" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.887362 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b543a20-70dd-45ec-8141-4bff06d6d0ce" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.887376 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" containerName="nova-metadata-log" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.888476 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.891009 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.891300 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.893471 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.909868 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.922268 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.922501 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.925965 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.926389 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.929194 4740 scope.go:117] "RemoveContainer" containerID="5fdfbd1770f95f346a0ab41f9ff9e297c076907cc69b48736e5aee5e70f2dbc6" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.940380 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.996567 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.996619 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.996643 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35899e71-0177-4329-9152-85c03acb6e32-logs\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.996688 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25s86\" (UniqueName: \"kubernetes.io/projected/35899e71-0177-4329-9152-85c03acb6e32-kube-api-access-25s86\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.996728 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.996780 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7ml\" (UniqueName: \"kubernetes.io/projected/ae10fd41-d2ed-4133-a41f-ecab597498fa-kube-api-access-5b7ml\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.996884 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.996969 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.997000 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:27 crc kubenswrapper[4740]: I0130 16:23:27.997025 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-config-data\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.099539 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.099657 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.099688 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.099711 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-config-data\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.099741 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.099765 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.099786 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35899e71-0177-4329-9152-85c03acb6e32-logs\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.099805 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25s86\" (UniqueName: \"kubernetes.io/projected/35899e71-0177-4329-9152-85c03acb6e32-kube-api-access-25s86\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.099831 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.099869 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b7ml\" (UniqueName: \"kubernetes.io/projected/ae10fd41-d2ed-4133-a41f-ecab597498fa-kube-api-access-5b7ml\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.101097 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35899e71-0177-4329-9152-85c03acb6e32-logs\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.107661 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.108064 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 
16:23:28.110480 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.111373 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-config-data\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.111471 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.112800 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.119196 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b7ml\" (UniqueName: \"kubernetes.io/projected/ae10fd41-d2ed-4133-a41f-ecab597498fa-kube-api-access-5b7ml\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.120108 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae10fd41-d2ed-4133-a41f-ecab597498fa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae10fd41-d2ed-4133-a41f-ecab597498fa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.124882 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25s86\" (UniqueName: \"kubernetes.io/projected/35899e71-0177-4329-9152-85c03acb6e32-kube-api-access-25s86\") pod \"nova-metadata-0\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.213444 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.246654 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.775044 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.791799 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ae10fd41-d2ed-4133-a41f-ecab597498fa","Type":"ContainerStarted","Data":"acacaac1040510d469e349ed6da32bdac329a35cfb87bd06fc16973125aa8263"} Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.801070 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmbnk" event={"ID":"91ed5442-418e-464d-a02c-f5c84075e45d","Type":"ContainerStarted","Data":"3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf"} Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.844406 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:23:28 crc kubenswrapper[4740]: I0130 16:23:28.846002 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tmbnk" podStartSLOduration=3.209468755 podStartE2EDuration="6.845976853s" podCreationTimestamp="2026-01-30 16:23:22 +0000 UTC" firstStartedPulling="2026-01-30 16:23:24.675670757 +0000 UTC m=+1653.312733356" lastFinishedPulling="2026-01-30 16:23:28.312178835 +0000 UTC m=+1656.949241454" observedRunningTime="2026-01-30 16:23:28.828118809 +0000 UTC m=+1657.465181428" watchObservedRunningTime="2026-01-30 16:23:28.845976853 +0000 UTC m=+1657.483039452" Jan 30 16:23:29 crc kubenswrapper[4740]: I0130 16:23:29.355214 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b543a20-70dd-45ec-8141-4bff06d6d0ce" path="/var/lib/kubelet/pods/0b543a20-70dd-45ec-8141-4bff06d6d0ce/volumes" Jan 30 16:23:29 crc kubenswrapper[4740]: I0130 16:23:29.356753 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2f4c9c-d1eb-4994-a199-a1ff7f53660d" path="/var/lib/kubelet/pods/cf2f4c9c-d1eb-4994-a199-a1ff7f53660d/volumes" Jan 30 16:23:29 crc kubenswrapper[4740]: I0130 16:23:29.820052 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ae10fd41-d2ed-4133-a41f-ecab597498fa","Type":"ContainerStarted","Data":"e7ad4c793b9a61e645ec67fd454f094334a82bb6ffc1a06a2eaaed3336c41e99"} Jan 30 16:23:29 crc kubenswrapper[4740]: I0130 16:23:29.825863 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35899e71-0177-4329-9152-85c03acb6e32","Type":"ContainerStarted","Data":"9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed"} Jan 30 16:23:29 crc kubenswrapper[4740]: I0130 16:23:29.825933 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35899e71-0177-4329-9152-85c03acb6e32","Type":"ContainerStarted","Data":"ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb"} Jan 30 16:23:29 crc kubenswrapper[4740]: I0130 16:23:29.825945 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35899e71-0177-4329-9152-85c03acb6e32","Type":"ContainerStarted","Data":"3da4c7f1712190c690028ea57bac904273dd9178fccb8732342c0a1580d240d6"} Jan 30 16:23:29 crc kubenswrapper[4740]: I0130 16:23:29.852632 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=2.852597623 podStartE2EDuration="2.852597623s" podCreationTimestamp="2026-01-30 16:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:23:29.839301323 +0000 UTC m=+1658.476363922" watchObservedRunningTime="2026-01-30 16:23:29.852597623 +0000 UTC m=+1658.489660222" Jan 30 16:23:29 crc kubenswrapper[4740]: I0130 16:23:29.863281 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.863256788 podStartE2EDuration="2.863256788s" podCreationTimestamp="2026-01-30 16:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:23:29.8588994 +0000 UTC m=+1658.495961999" watchObservedRunningTime="2026-01-30 16:23:29.863256788 +0000 UTC m=+1658.500319387" Jan 30 16:23:30 crc kubenswrapper[4740]: I0130 16:23:30.491756 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 16:23:30 crc kubenswrapper[4740]: I0130 16:23:30.493156 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 16:23:30 crc kubenswrapper[4740]: I0130 16:23:30.497053 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 16:23:30 crc kubenswrapper[4740]: I0130 16:23:30.508008 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 16:23:30 crc kubenswrapper[4740]: I0130 16:23:30.838004 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 16:23:30 crc kubenswrapper[4740]: I0130 16:23:30.843479 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 16:23:30 crc kubenswrapper[4740]: I0130 16:23:30.980622 4740 scope.go:117] "RemoveContainer" containerID="ef40eb91da1e9153003a0ea45570f3a711bb6a5f834982a83efd5f0810b385c2" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.146194 4740 scope.go:117] "RemoveContainer" containerID="ff8773bcdc98b06479c05b63272fb5a64ab27e6e6a2e3e085e46be38175afd2d" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.163258 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54dd998c-sv6kp"] Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.165979 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.191385 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-sv6kp"] Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.240250 4740 scope.go:117] "RemoveContainer" containerID="3035cc0d3b0b95d327c049822f58a321542e9893a907a300910038c186052482" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.303879 4740 scope.go:117] "RemoveContainer" containerID="c335912e6c8a515bf6d7a0922c3bee7b7df4d7b7849a3545ac13ad96a0fcb55c" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.311833 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-svc\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.311899 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.312022 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vqz4\" (UniqueName: \"kubernetes.io/projected/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-kube-api-access-8vqz4\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.312060 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.312163 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.312402 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-config\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.415946 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.416522 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.416544 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vqz4\" (UniqueName: \"kubernetes.io/projected/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-kube-api-access-8vqz4\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.416618 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.416741 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-config\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.416970 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-svc\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.417585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.417981 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-svc\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.418335 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.418790 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.419439 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-config\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: 
\"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.449481 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vqz4\" (UniqueName: \"kubernetes.io/projected/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-kube-api-access-8vqz4\") pod \"dnsmasq-dns-54dd998c-sv6kp\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:31 crc kubenswrapper[4740]: I0130 16:23:31.567778 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:32 crc kubenswrapper[4740]: I0130 16:23:32.234278 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-sv6kp"] Jan 30 16:23:32 crc kubenswrapper[4740]: I0130 16:23:32.953550 4740 generic.go:334] "Generic (PLEG): container finished" podID="82b4d3c6-f45f-4677-8369-ee9a6f1746b7" containerID="740cc3e7e14c53fd48c66234036400b60b80c0290ace0b88dbca7702055f6873" exitCode=0 Jan 30 16:23:32 crc kubenswrapper[4740]: I0130 16:23:32.954295 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" event={"ID":"82b4d3c6-f45f-4677-8369-ee9a6f1746b7","Type":"ContainerDied","Data":"740cc3e7e14c53fd48c66234036400b60b80c0290ace0b88dbca7702055f6873"} Jan 30 16:23:32 crc kubenswrapper[4740]: I0130 16:23:32.954389 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" event={"ID":"82b4d3c6-f45f-4677-8369-ee9a6f1746b7","Type":"ContainerStarted","Data":"ce427d2d38a04e0d66835ab0f37e64418e8bf1ffbe6eb378064607bba7c3b937"} Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.078834 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.078917 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.181129 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.216502 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.247506 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.247573 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.698507 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.699258 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="ceilometer-notification-agent" containerID="cri-o://1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2" gracePeriod=30 Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.699233 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="proxy-httpd" 
containerID="cri-o://bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2" gracePeriod=30 Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.699295 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="sg-core" containerID="cri-o://fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297" gracePeriod=30 Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.699161 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="ceilometer-central-agent" containerID="cri-o://15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4" gracePeriod=30 Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.805579 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.965307 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" event={"ID":"82b4d3c6-f45f-4677-8369-ee9a6f1746b7","Type":"ContainerStarted","Data":"9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05"} Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.965612 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.970281 4740 generic.go:334] "Generic (PLEG): container finished" podID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerID="bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2" exitCode=0 Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.970313 4740 generic.go:334] "Generic (PLEG): container finished" podID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerID="fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297" exitCode=2 Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.970370 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a66ba5b7-134a-4917-9426-fdad3dfe7dfc","Type":"ContainerDied","Data":"bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2"} Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.970417 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a66ba5b7-134a-4917-9426-fdad3dfe7dfc","Type":"ContainerDied","Data":"fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297"} Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.970779 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3715417d-ffd8-44be-83fb-49298212ff8b" containerName="nova-api-log" containerID="cri-o://2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02" gracePeriod=30 Jan 30 16:23:33 crc kubenswrapper[4740]: I0130 16:23:33.970815 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3715417d-ffd8-44be-83fb-49298212ff8b" containerName="nova-api-api" containerID="cri-o://15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0" gracePeriod=30 Jan 30 16:23:34 crc kubenswrapper[4740]: I0130 16:23:34.008893 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" podStartSLOduration=3.008866481 podStartE2EDuration="3.008866481s" podCreationTimestamp="2026-01-30 16:23:31 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:23:33.992163686 +0000 UTC m=+1662.629226285" watchObservedRunningTime="2026-01-30 16:23:34.008866481 +0000 UTC m=+1662.645929080" Jan 30 16:23:34 crc kubenswrapper[4740]: I0130 16:23:34.045503 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:34 crc kubenswrapper[4740]: I0130 16:23:34.138466 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tmbnk"] Jan 30 16:23:34 crc kubenswrapper[4740]: I0130 16:23:34.983113 4740 generic.go:334] "Generic (PLEG): container finished" podID="3715417d-ffd8-44be-83fb-49298212ff8b" containerID="2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02" exitCode=143 Jan 30 16:23:34 crc kubenswrapper[4740]: I0130 16:23:34.983156 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3715417d-ffd8-44be-83fb-49298212ff8b","Type":"ContainerDied","Data":"2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02"} Jan 30 16:23:34 crc kubenswrapper[4740]: I0130 16:23:34.986577 4740 generic.go:334] "Generic (PLEG): container finished" podID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerID="15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4" exitCode=0 Jan 30 16:23:34 crc kubenswrapper[4740]: I0130 16:23:34.986646 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a66ba5b7-134a-4917-9426-fdad3dfe7dfc","Type":"ContainerDied","Data":"15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4"} Jan 30 16:23:35 crc kubenswrapper[4740]: I0130 16:23:35.996779 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tmbnk" podUID="91ed5442-418e-464d-a02c-f5c84075e45d" containerName="registry-server" containerID="cri-o://3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf" gracePeriod=2 Jan 30 16:23:36 crc kubenswrapper[4740]: I0130 16:23:36.698555 4740 util.go:48] "No ready sandbox for pod can be found. 
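Need to start a new one" pod="openshift-marketplace/community-operators-tmbnk"

The "Killing container with a grace period" entry above (gracePeriod=2) is the kubelet honoring the grace period carried on the API-side delete of the community-operators-tmbnk pod. A minimal client-go sketch of a delete that would propagate such a grace period; the 2-second value mirrors the registry-server kill above, the rest is boilerplate, and kubeconfig handling is omitted:

    // Sketch: deleting a pod with an explicit grace period. The kubelet
    // turns this into a SIGTERM, then a SIGKILL once the window expires.
    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func deleteWithGrace(ctx context.Context, cs kubernetes.Interface) error {
        grace := int64(2) // matches gracePeriod=2 in the log entry above
        return cs.CoreV1().Pods("openshift-marketplace").Delete(ctx,
            "community-operators-tmbnk",
            metav1.DeleteOptions{GracePeriodSeconds: &grace})
    }
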
Jan 30 16:23:36 crc kubenswrapper[4740]: I0130 16:23:36.762033 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd48h\" (UniqueName: \"kubernetes.io/projected/91ed5442-418e-464d-a02c-f5c84075e45d-kube-api-access-fd48h\") pod \"91ed5442-418e-464d-a02c-f5c84075e45d\" (UID: \"91ed5442-418e-464d-a02c-f5c84075e45d\") "
Jan 30 16:23:36 crc kubenswrapper[4740]: I0130 16:23:36.762319 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-utilities\") pod \"91ed5442-418e-464d-a02c-f5c84075e45d\" (UID: \"91ed5442-418e-464d-a02c-f5c84075e45d\") "
Jan 30 16:23:36 crc kubenswrapper[4740]: I0130 16:23:36.762536 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-catalog-content\") pod \"91ed5442-418e-464d-a02c-f5c84075e45d\" (UID: \"91ed5442-418e-464d-a02c-f5c84075e45d\") "
Jan 30 16:23:36 crc kubenswrapper[4740]: I0130 16:23:36.763572 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-utilities" (OuterVolumeSpecName: "utilities") pod "91ed5442-418e-464d-a02c-f5c84075e45d" (UID: "91ed5442-418e-464d-a02c-f5c84075e45d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:23:36 crc kubenswrapper[4740]: I0130 16:23:36.768977 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ed5442-418e-464d-a02c-f5c84075e45d-kube-api-access-fd48h" (OuterVolumeSpecName: "kube-api-access-fd48h") pod "91ed5442-418e-464d-a02c-f5c84075e45d" (UID: "91ed5442-418e-464d-a02c-f5c84075e45d"). InnerVolumeSpecName "kube-api-access-fd48h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:23:36 crc kubenswrapper[4740]: I0130 16:23:36.834000 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91ed5442-418e-464d-a02c-f5c84075e45d" (UID: "91ed5442-418e-464d-a02c-f5c84075e45d"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:23:36 crc kubenswrapper[4740]: I0130 16:23:36.865840 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:36 crc kubenswrapper[4740]: I0130 16:23:36.866327 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ed5442-418e-464d-a02c-f5c84075e45d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:36 crc kubenswrapper[4740]: I0130 16:23:36.866408 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd48h\" (UniqueName: \"kubernetes.io/projected/91ed5442-418e-464d-a02c-f5c84075e45d-kube-api-access-fd48h\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.010722 4740 generic.go:334] "Generic (PLEG): container finished" podID="91ed5442-418e-464d-a02c-f5c84075e45d" containerID="3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf" exitCode=0 Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.010788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmbnk" event={"ID":"91ed5442-418e-464d-a02c-f5c84075e45d","Type":"ContainerDied","Data":"3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf"} Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.010825 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmbnk" event={"ID":"91ed5442-418e-464d-a02c-f5c84075e45d","Type":"ContainerDied","Data":"adb38d0738196e4b168e641492bfa81a87358f1a96531cd508c039d9b7290652"} Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.010848 4740 scope.go:117] "RemoveContainer" containerID="3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.010848 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tmbnk" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.036014 4740 scope.go:117] "RemoveContainer" containerID="c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.050833 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tmbnk"] Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.063145 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tmbnk"] Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.064479 4740 scope.go:117] "RemoveContainer" containerID="a2e3e7123b17398926c0c4bf54686abd67d633ed750f34b98a1266f30766d527" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.133469 4740 scope.go:117] "RemoveContainer" containerID="3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf" Jan 30 16:23:37 crc kubenswrapper[4740]: E0130 16:23:37.136986 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf\": container with ID starting with 3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf not found: ID does not exist" containerID="3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.137055 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf"} err="failed to get container status \"3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf\": rpc error: code = NotFound desc = could not find container \"3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf\": container with ID starting with 3b0ca1d27ad71b916a720fa20713ae12bbf36e9985a2ae992edc8244f81d47bf not found: ID does not exist" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.137112 4740 scope.go:117] "RemoveContainer" containerID="c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501" Jan 30 16:23:37 crc kubenswrapper[4740]: E0130 16:23:37.137732 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501\": container with ID starting with c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501 not found: ID does not exist" containerID="c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.137865 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501"} err="failed to get container status \"c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501\": rpc error: code = NotFound desc = could not find container \"c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501\": container with ID starting with c87669f0cf91973a5fa39641e52e6ec5704afeb2306eac9f661811dd36bc6501 not found: ID does not exist" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.137993 4740 scope.go:117] "RemoveContainer" containerID="a2e3e7123b17398926c0c4bf54686abd67d633ed750f34b98a1266f30766d527" Jan 30 16:23:37 crc kubenswrapper[4740]: E0130 16:23:37.138529 4740 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a2e3e7123b17398926c0c4bf54686abd67d633ed750f34b98a1266f30766d527\": container with ID starting with a2e3e7123b17398926c0c4bf54686abd67d633ed750f34b98a1266f30766d527 not found: ID does not exist" containerID="a2e3e7123b17398926c0c4bf54686abd67d633ed750f34b98a1266f30766d527" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.138588 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e3e7123b17398926c0c4bf54686abd67d633ed750f34b98a1266f30766d527"} err="failed to get container status \"a2e3e7123b17398926c0c4bf54686abd67d633ed750f34b98a1266f30766d527\": rpc error: code = NotFound desc = could not find container \"a2e3e7123b17398926c0c4bf54686abd67d633ed750f34b98a1266f30766d527\": container with ID starting with a2e3e7123b17398926c0c4bf54686abd67d633ed750f34b98a1266f30766d527 not found: ID does not exist" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.335730 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:23:37 crc kubenswrapper[4740]: E0130 16:23:37.336496 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.365154 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ed5442-418e-464d-a02c-f5c84075e45d" path="/var/lib/kubelet/pods/91ed5442-418e-464d-a02c-f5c84075e45d/volumes" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.714419 4740 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.792180 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-config-data\") pod \"3715417d-ffd8-44be-83fb-49298212ff8b\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") "
Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.792240 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-combined-ca-bundle\") pod \"3715417d-ffd8-44be-83fb-49298212ff8b\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") "
Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.792541 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3715417d-ffd8-44be-83fb-49298212ff8b-logs\") pod \"3715417d-ffd8-44be-83fb-49298212ff8b\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") "
Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.792602 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcchj\" (UniqueName: \"kubernetes.io/projected/3715417d-ffd8-44be-83fb-49298212ff8b-kube-api-access-mcchj\") pod \"3715417d-ffd8-44be-83fb-49298212ff8b\" (UID: \"3715417d-ffd8-44be-83fb-49298212ff8b\") "
Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.793222 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3715417d-ffd8-44be-83fb-49298212ff8b-logs" (OuterVolumeSpecName: "logs") pod "3715417d-ffd8-44be-83fb-49298212ff8b" (UID: "3715417d-ffd8-44be-83fb-49298212ff8b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.807774 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3715417d-ffd8-44be-83fb-49298212ff8b-kube-api-access-mcchj" (OuterVolumeSpecName: "kube-api-access-mcchj") pod "3715417d-ffd8-44be-83fb-49298212ff8b" (UID: "3715417d-ffd8-44be-83fb-49298212ff8b"). InnerVolumeSpecName "kube-api-access-mcchj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.834916 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-config-data" (OuterVolumeSpecName: "config-data") pod "3715417d-ffd8-44be-83fb-49298212ff8b" (UID: "3715417d-ffd8-44be-83fb-49298212ff8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.840171 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3715417d-ffd8-44be-83fb-49298212ff8b" (UID: "3715417d-ffd8-44be-83fb-49298212ff8b"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.897203 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.897238 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3715417d-ffd8-44be-83fb-49298212ff8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.897251 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3715417d-ffd8-44be-83fb-49298212ff8b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:37 crc kubenswrapper[4740]: I0130 16:23:37.897263 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcchj\" (UniqueName: \"kubernetes.io/projected/3715417d-ffd8-44be-83fb-49298212ff8b-kube-api-access-mcchj\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.031201 4740 generic.go:334] "Generic (PLEG): container finished" podID="3715417d-ffd8-44be-83fb-49298212ff8b" containerID="15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0" exitCode=0 Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.031403 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3715417d-ffd8-44be-83fb-49298212ff8b","Type":"ContainerDied","Data":"15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0"} Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.031788 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3715417d-ffd8-44be-83fb-49298212ff8b","Type":"ContainerDied","Data":"1017a0b5ec32876480f24b59d89172880e27c6a6f563674aa7412e84e1201b55"} Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.031828 4740 scope.go:117] "RemoveContainer" containerID="15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.031541 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.071880 4740 scope.go:117] "RemoveContainer" containerID="2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.074338 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.095897 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.112630 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:38 crc kubenswrapper[4740]: E0130 16:23:38.113307 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3715417d-ffd8-44be-83fb-49298212ff8b" containerName="nova-api-log" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.113326 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3715417d-ffd8-44be-83fb-49298212ff8b" containerName="nova-api-log" Jan 30 16:23:38 crc kubenswrapper[4740]: E0130 16:23:38.113370 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ed5442-418e-464d-a02c-f5c84075e45d" containerName="extract-content" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.113384 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ed5442-418e-464d-a02c-f5c84075e45d" containerName="extract-content" Jan 30 16:23:38 crc kubenswrapper[4740]: E0130 16:23:38.113403 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3715417d-ffd8-44be-83fb-49298212ff8b" containerName="nova-api-api" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.113410 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3715417d-ffd8-44be-83fb-49298212ff8b" containerName="nova-api-api" Jan 30 16:23:38 crc kubenswrapper[4740]: E0130 16:23:38.113427 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ed5442-418e-464d-a02c-f5c84075e45d" containerName="extract-utilities" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.113434 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ed5442-418e-464d-a02c-f5c84075e45d" containerName="extract-utilities" Jan 30 16:23:38 crc kubenswrapper[4740]: E0130 16:23:38.113454 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ed5442-418e-464d-a02c-f5c84075e45d" containerName="registry-server" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.113460 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ed5442-418e-464d-a02c-f5c84075e45d" containerName="registry-server" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.113718 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3715417d-ffd8-44be-83fb-49298212ff8b" containerName="nova-api-log" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.113735 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ed5442-418e-464d-a02c-f5c84075e45d" containerName="registry-server" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.113759 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3715417d-ffd8-44be-83fb-49298212ff8b" containerName="nova-api-api" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.115125 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.120008 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.120392 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.120461 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.121130 4740 scope.go:117] "RemoveContainer" containerID="15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0" Jan 30 16:23:38 crc kubenswrapper[4740]: E0130 16:23:38.121911 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0\": container with ID starting with 15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0 not found: ID does not exist" containerID="15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.121950 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0"} err="failed to get container status \"15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0\": rpc error: code = NotFound desc = could not find container \"15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0\": container with ID starting with 15939353ea3b4ed73a1658654b9c6f4a3397229f83b3cc1ad0f62488f585a2c0 not found: ID does not exist" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.122033 4740 scope.go:117] "RemoveContainer" containerID="2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02" Jan 30 16:23:38 crc kubenswrapper[4740]: E0130 16:23:38.122611 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02\": container with ID starting with 2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02 not found: ID does not exist" containerID="2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.122690 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02"} err="failed to get container status \"2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02\": rpc error: code = NotFound desc = could not find container \"2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02\": container with ID starting with 2f192a6812d880abcf9baa6eece302401fad04c953cc5fb1f6b79095fea3db02 not found: ID does not exist" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.136082 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.206384 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 
16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.206503 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767xq\" (UniqueName: \"kubernetes.io/projected/a1dda2e1-4965-4204-beb0-692bd27bc2a7-kube-api-access-767xq\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.206673 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-public-tls-certs\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.206720 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-config-data\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.206774 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1dda2e1-4965-4204-beb0-692bd27bc2a7-logs\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.206821 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.215057 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.243212 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.246872 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.247176 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.309445 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767xq\" (UniqueName: \"kubernetes.io/projected/a1dda2e1-4965-4204-beb0-692bd27bc2a7-kube-api-access-767xq\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.309745 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-public-tls-certs\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.309792 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-config-data\") pod \"nova-api-0\" (UID: 
\"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.309888 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1dda2e1-4965-4204-beb0-692bd27bc2a7-logs\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.310391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1dda2e1-4965-4204-beb0-692bd27bc2a7-logs\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.310552 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.310598 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.315120 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-public-tls-certs\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.315287 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-config-data\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.316756 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.330001 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.333709 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-767xq\" (UniqueName: \"kubernetes.io/projected/a1dda2e1-4965-4204-beb0-692bd27bc2a7-kube-api-access-767xq\") pod \"nova-api-0\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") " pod="openstack/nova-api-0" Jan 30 16:23:38 crc kubenswrapper[4740]: I0130 16:23:38.504427 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.069738 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.265808 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35899e71-0177-4329-9152-85c03acb6e32" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.265944 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35899e71-0177-4329-9152-85c03acb6e32" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.289091 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-9g9ww"] Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.291407 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9g9ww" Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.295181 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.296414 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.321097 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9g9ww"] Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.374822 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-config-data\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww" Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.387393 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-scripts\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww" Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.395939 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxtf\" (UniqueName: \"kubernetes.io/projected/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-kube-api-access-sjxtf\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww" Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.396094 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww" Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.463975 4740 
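kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3715417d-ffd8-44be-83fb-49298212ff8b" path="/var/lib/kubelet/pods/3715417d-ffd8-44be-83fb-49298212ff8b/volumes"

The two "Probe failed" entries above record startup probes against https://10.217.0.228:8775/ timing out on the client side ("net/http: request canceled (Client.Timeout exceeded while awaiting headers)"). A sketch of a startup probe shaped like that endpoint, using k8s.io/api types; the timeout, period, and failure threshold are illustrative assumptions, since the log shows only the failure, not the probe spec:

    // Sketch: an HTTPS startup probe on port 8775, as the failures above
    // imply. A short TimeoutSeconds is what surfaces as the Client.Timeout
    // error; the remaining numbers are placeholders.
    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func novaMetadataStartupProbe() *corev1.Probe {
        return &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path:   "/",
                    Port:   intstr.FromInt(8775),
                    Scheme: corev1.URISchemeHTTPS,
                },
            },
            TimeoutSeconds:   1,  // assumed; low timeouts produce the errors above
            PeriodSeconds:    5,  // assumed
            FailureThreshold: 12, // assumed
        }
    }
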
Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.480945 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.501135 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-config-data\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww"
Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.504910 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-scripts\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww"
Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.505026 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjxtf\" (UniqueName: \"kubernetes.io/projected/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-kube-api-access-sjxtf\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww"
Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.505152 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww"
Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.513974 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww"
Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.514023 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-scripts\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww"
Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.515057 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-config-data\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww"
Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.528701 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjxtf\" (UniqueName: \"kubernetes.io/projected/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-kube-api-access-sjxtf\") pod \"nova-cell1-cell-mapping-9g9ww\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " pod="openstack/nova-cell1-cell-mapping-9g9ww"
Jan 30 16:23:39 crc kubenswrapper[4740]: I0130 16:23:39.641107 4740 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9g9ww" Jan 30 16:23:40 crc kubenswrapper[4740]: I0130 16:23:40.084551 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1dda2e1-4965-4204-beb0-692bd27bc2a7","Type":"ContainerStarted","Data":"420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9"} Jan 30 16:23:40 crc kubenswrapper[4740]: I0130 16:23:40.085479 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1dda2e1-4965-4204-beb0-692bd27bc2a7","Type":"ContainerStarted","Data":"c1cbafa83e2be52db201622dafe2271176fc2fc95f0ffb61fda9e60f721fa0eb"} Jan 30 16:23:40 crc kubenswrapper[4740]: W0130 16:23:40.185220 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0da4cd8_4245_4e9d_bafb_5054e6ed647c.slice/crio-02c8c201054d3cfb6580bc8da84916c3d43fc6ad49560404f0c9b8484fbc3abe WatchSource:0}: Error finding container 02c8c201054d3cfb6580bc8da84916c3d43fc6ad49560404f0c9b8484fbc3abe: Status 404 returned error can't find the container with id 02c8c201054d3cfb6580bc8da84916c3d43fc6ad49560404f0c9b8484fbc3abe Jan 30 16:23:40 crc kubenswrapper[4740]: I0130 16:23:40.190246 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9g9ww"] Jan 30 16:23:41 crc kubenswrapper[4740]: I0130 16:23:41.096715 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1dda2e1-4965-4204-beb0-692bd27bc2a7","Type":"ContainerStarted","Data":"7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305"} Jan 30 16:23:41 crc kubenswrapper[4740]: I0130 16:23:41.098881 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9g9ww" event={"ID":"f0da4cd8-4245-4e9d-bafb-5054e6ed647c","Type":"ContainerStarted","Data":"1b8bd308acaad5bda22ba9cdf85eea3009e8f5b0a835b284c23faf9856264ed4"} Jan 30 16:23:41 crc kubenswrapper[4740]: I0130 16:23:41.098919 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9g9ww" event={"ID":"f0da4cd8-4245-4e9d-bafb-5054e6ed647c","Type":"ContainerStarted","Data":"02c8c201054d3cfb6580bc8da84916c3d43fc6ad49560404f0c9b8484fbc3abe"} Jan 30 16:23:41 crc kubenswrapper[4740]: I0130 16:23:41.125787 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.12575082 podStartE2EDuration="3.12575082s" podCreationTimestamp="2026-01-30 16:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:23:41.121070513 +0000 UTC m=+1669.758133132" watchObservedRunningTime="2026-01-30 16:23:41.12575082 +0000 UTC m=+1669.762813419" Jan 30 16:23:41 crc kubenswrapper[4740]: I0130 16:23:41.154902 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-9g9ww" podStartSLOduration=2.154867553 podStartE2EDuration="2.154867553s" podCreationTimestamp="2026-01-30 16:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:23:41.143905711 +0000 UTC m=+1669.780968310" watchObservedRunningTime="2026-01-30 16:23:41.154867553 +0000 UTC m=+1669.791930152" Jan 30 16:23:41 crc kubenswrapper[4740]: I0130 16:23:41.570651 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
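status="ready" pod="openstack/dnsmasq-dns-54dd998c-sv6kp"

The SyncLoop ADD/UPDATE/DELETE and PLEG ContainerStarted entries in this stretch are the kubelet's view of pod changes arriving from the API server. A minimal client-go sketch that watches the same pod from the API side; the namespace and pod name are taken from the entries above, while retry, resync, and shutdown handling are omitted:

    // Sketch: a single-pod watch whose ADDED/MODIFIED/DELETED events are
    // the API-side counterpart of the SyncLoop ADD/UPDATE/DELETE lines here.
    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func watchPod(ctx context.Context, cs kubernetes.Interface) error {
        w, err := cs.CoreV1().Pods("openstack").Watch(ctx, metav1.ListOptions{
            FieldSelector: "metadata.name=dnsmasq-dns-54dd998c-sv6kp",
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println(ev.Type) // ADDED / MODIFIED / DELETED
        }
        return nil
    }
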
status="ready" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:23:41 crc kubenswrapper[4740]: I0130 16:23:41.678081 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-4r9vp"] Jan 30 16:23:41 crc kubenswrapper[4740]: I0130 16:23:41.679262 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp" podUID="c324fe3d-c078-427f-9c52-b99f1008f395" containerName="dnsmasq-dns" containerID="cri-o://0a389692e8eb7f0fd8c33bcc7f0b80044c3c11f5d34eca4fb5aa12a52f6deb3e" gracePeriod=10 Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.122868 4740 generic.go:334] "Generic (PLEG): container finished" podID="c324fe3d-c078-427f-9c52-b99f1008f395" containerID="0a389692e8eb7f0fd8c33bcc7f0b80044c3c11f5d34eca4fb5aa12a52f6deb3e" exitCode=0 Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.123079 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp" event={"ID":"c324fe3d-c078-427f-9c52-b99f1008f395","Type":"ContainerDied","Data":"0a389692e8eb7f0fd8c33bcc7f0b80044c3c11f5d34eca4fb5aa12a52f6deb3e"} Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.366512 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp" Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.494935 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-sb\") pod \"c324fe3d-c078-427f-9c52-b99f1008f395\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.495169 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-swift-storage-0\") pod \"c324fe3d-c078-427f-9c52-b99f1008f395\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.495336 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k25ct\" (UniqueName: \"kubernetes.io/projected/c324fe3d-c078-427f-9c52-b99f1008f395-kube-api-access-k25ct\") pod \"c324fe3d-c078-427f-9c52-b99f1008f395\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.495480 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-config\") pod \"c324fe3d-c078-427f-9c52-b99f1008f395\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.495506 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-nb\") pod \"c324fe3d-c078-427f-9c52-b99f1008f395\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.495574 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-svc\") pod \"c324fe3d-c078-427f-9c52-b99f1008f395\" (UID: \"c324fe3d-c078-427f-9c52-b99f1008f395\") " Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.503575 4740 operation_generator.go:803] 
Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.559196 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c324fe3d-c078-427f-9c52-b99f1008f395" (UID: "c324fe3d-c078-427f-9c52-b99f1008f395"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.560945 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c324fe3d-c078-427f-9c52-b99f1008f395" (UID: "c324fe3d-c078-427f-9c52-b99f1008f395"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.562658 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c324fe3d-c078-427f-9c52-b99f1008f395" (UID: "c324fe3d-c078-427f-9c52-b99f1008f395"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.564271 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c324fe3d-c078-427f-9c52-b99f1008f395" (UID: "c324fe3d-c078-427f-9c52-b99f1008f395"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.572784 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-config" (OuterVolumeSpecName: "config") pod "c324fe3d-c078-427f-9c52-b99f1008f395" (UID: "c324fe3d-c078-427f-9c52-b99f1008f395"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.603448 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.603500 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-config\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.603513 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.603526 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.603542 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c324fe3d-c078-427f-9c52-b99f1008f395-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:42 crc kubenswrapper[4740]: I0130 16:23:42.603561 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k25ct\" (UniqueName: \"kubernetes.io/projected/c324fe3d-c078-427f-9c52-b99f1008f395-kube-api-access-k25ct\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.137223 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp" event={"ID":"c324fe3d-c078-427f-9c52-b99f1008f395","Type":"ContainerDied","Data":"5f2cbd5e6ba8d75d633b6d3d421890d2156772de2bafb95576cd67baebf0ad7a"}
Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.137295 4740 scope.go:117] "RemoveContainer" containerID="0a389692e8eb7f0fd8c33bcc7f0b80044c3c11f5d34eca4fb5aa12a52f6deb3e"
Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.137582 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-4r9vp"
Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.173475 4740 scope.go:117] "RemoveContainer" containerID="402635d37ade2058deb5d4897d4475adbcf6c85020548db4f63684e4130cd355"
Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.209570 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-4r9vp"]
Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.228927 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-4r9vp"]
Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.358920 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c324fe3d-c078-427f-9c52-b99f1008f395" path="/var/lib/kubelet/pods/c324fe3d-c078-427f-9c52-b99f1008f395/volumes"
Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.816891 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.947038 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-log-httpd\") pod \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.947145 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-combined-ca-bundle\") pod \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.947193 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxfkg\" (UniqueName: \"kubernetes.io/projected/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-kube-api-access-vxfkg\") pod \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.947277 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-run-httpd\") pod \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.947447 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-scripts\") pod \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.947608 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-ceilometer-tls-certs\") pod \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.947656 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-config-data\") pod \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.947702 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-sg-core-conf-yaml\") pod \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\" (UID: \"a66ba5b7-134a-4917-9426-fdad3dfe7dfc\") " Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.947794 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a66ba5b7-134a-4917-9426-fdad3dfe7dfc" (UID: "a66ba5b7-134a-4917-9426-fdad3dfe7dfc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.947915 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a66ba5b7-134a-4917-9426-fdad3dfe7dfc" (UID: "a66ba5b7-134a-4917-9426-fdad3dfe7dfc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.948298 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.948315 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.967603 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-kube-api-access-vxfkg" (OuterVolumeSpecName: "kube-api-access-vxfkg") pod "a66ba5b7-134a-4917-9426-fdad3dfe7dfc" (UID: "a66ba5b7-134a-4917-9426-fdad3dfe7dfc"). InnerVolumeSpecName "kube-api-access-vxfkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:43 crc kubenswrapper[4740]: I0130 16:23:43.971553 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-scripts" (OuterVolumeSpecName: "scripts") pod "a66ba5b7-134a-4917-9426-fdad3dfe7dfc" (UID: "a66ba5b7-134a-4917-9426-fdad3dfe7dfc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.001332 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a66ba5b7-134a-4917-9426-fdad3dfe7dfc" (UID: "a66ba5b7-134a-4917-9426-fdad3dfe7dfc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.019131 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a66ba5b7-134a-4917-9426-fdad3dfe7dfc" (UID: "a66ba5b7-134a-4917-9426-fdad3dfe7dfc"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.051023 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.051065 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.051083 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.051095 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxfkg\" (UniqueName: \"kubernetes.io/projected/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-kube-api-access-vxfkg\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.054135 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a66ba5b7-134a-4917-9426-fdad3dfe7dfc" (UID: "a66ba5b7-134a-4917-9426-fdad3dfe7dfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.084085 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-config-data" (OuterVolumeSpecName: "config-data") pod "a66ba5b7-134a-4917-9426-fdad3dfe7dfc" (UID: "a66ba5b7-134a-4917-9426-fdad3dfe7dfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.153547 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.153600 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a66ba5b7-134a-4917-9426-fdad3dfe7dfc-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.160184 4740 generic.go:334] "Generic (PLEG): container finished" podID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerID="1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2" exitCode=0 Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.160262 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a66ba5b7-134a-4917-9426-fdad3dfe7dfc","Type":"ContainerDied","Data":"1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2"} Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.160312 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.160343 4740 scope.go:117] "RemoveContainer" containerID="bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.160325 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a66ba5b7-134a-4917-9426-fdad3dfe7dfc","Type":"ContainerDied","Data":"0a1d1ac6f01db825c6f5adb5d98e9a899900a7418ef3c1366e282ca335bb6109"} Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.220254 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.221947 4740 scope.go:117] "RemoveContainer" containerID="fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.231971 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.252068 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:23:44 crc kubenswrapper[4740]: E0130 16:23:44.252715 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c324fe3d-c078-427f-9c52-b99f1008f395" containerName="init" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.252739 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c324fe3d-c078-427f-9c52-b99f1008f395" containerName="init" Jan 30 16:23:44 crc kubenswrapper[4740]: E0130 16:23:44.252759 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="sg-core" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.252766 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="sg-core" Jan 30 16:23:44 crc kubenswrapper[4740]: E0130 16:23:44.252778 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="ceilometer-notification-agent" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.252784 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="ceilometer-notification-agent" Jan 30 16:23:44 crc kubenswrapper[4740]: E0130 16:23:44.252801 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="proxy-httpd" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.252807 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="proxy-httpd" Jan 30 16:23:44 crc kubenswrapper[4740]: E0130 16:23:44.252831 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c324fe3d-c078-427f-9c52-b99f1008f395" containerName="dnsmasq-dns" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.252837 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c324fe3d-c078-427f-9c52-b99f1008f395" containerName="dnsmasq-dns" Jan 30 16:23:44 crc kubenswrapper[4740]: E0130 16:23:44.252864 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="ceilometer-central-agent" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.252876 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="ceilometer-central-agent" Jan 30 16:23:44 crc 
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.253129 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c324fe3d-c078-427f-9c52-b99f1008f395" containerName="dnsmasq-dns"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.253151 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="sg-core"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.253162 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="ceilometer-central-agent"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.253173 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" containerName="ceilometer-notification-agent"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.256016 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.260916 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.261418 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.261640 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.273654 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.302862 4740 scope.go:117] "RemoveContainer" containerID="1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.334269 4740 scope.go:117] "RemoveContainer" containerID="15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.359208 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.359270 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hqh\" (UniqueName: \"kubernetes.io/projected/5c1941da-08f1-48e5-afcd-99d376601f66-kube-api-access-v5hqh\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.359301 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-config-data\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.359390 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.359468 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-scripts\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.359517 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-run-httpd\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.359558 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.359660 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-log-httpd\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.361598 4740 scope.go:117] "RemoveContainer" containerID="bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2"
Jan 30 16:23:44 crc kubenswrapper[4740]: E0130 16:23:44.362116 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2\": container with ID starting with bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2 not found: ID does not exist" containerID="bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.362289 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2"} err="failed to get container status \"bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2\": rpc error: code = NotFound desc = could not find container \"bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2\": container with ID starting with bb3d08bac53cd42bd85711ae17db1f72b7bad9512c04878930a04fa02ad329c2 not found: ID does not exist"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.362404 4740 scope.go:117] "RemoveContainer" containerID="fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297"
Jan 30 16:23:44 crc kubenswrapper[4740]: E0130 16:23:44.363087 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297\": container with ID starting with fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297 not found: ID does not exist" containerID="fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.363111 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297"} err="failed to get container status \"fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297\": rpc error: code = NotFound desc = could not find container \"fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297\": container with ID starting with fdbecb1eee343921d2fe93fb2124f87b6390e6d4cd9b872f1b2f7819aa3b9297 not found: ID does not exist"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.363124 4740 scope.go:117] "RemoveContainer" containerID="1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2"
Jan 30 16:23:44 crc kubenswrapper[4740]: E0130 16:23:44.364769 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2\": container with ID starting with 1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2 not found: ID does not exist" containerID="1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.365526 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2"} err="failed to get container status \"1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2\": rpc error: code = NotFound desc = could not find container \"1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2\": container with ID starting with 1e8c00bc70a2304f5847b414320f46f37c1d87c1f51c443f98e6fe06061436a2 not found: ID does not exist"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.365791 4740 scope.go:117] "RemoveContainer" containerID="15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4"
Jan 30 16:23:44 crc kubenswrapper[4740]: E0130 16:23:44.368526 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4\": container with ID starting with 15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4 not found: ID does not exist" containerID="15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.368595 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4"} err="failed to get container status \"15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4\": rpc error: code = NotFound desc = could not find container \"15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4\": container with ID starting with 15867da30613372a077105a3aea7a6160bffdaeae636951a7413246db753a5c4 not found: ID does not exist"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.463384 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.463793 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-scripts\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.463979 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-run-httpd\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.464085 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.464206 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-log-httpd\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.464322 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.464419 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hqh\" (UniqueName: \"kubernetes.io/projected/5c1941da-08f1-48e5-afcd-99d376601f66-kube-api-access-v5hqh\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.464511 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-config-data\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.464572 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-run-httpd\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.464697 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-log-httpd\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.469171 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.469925 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-config-data\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0"
\"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-config-data\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.471999 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.472509 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-scripts\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.481989 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.484410 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hqh\" (UniqueName: \"kubernetes.io/projected/5c1941da-08f1-48e5-afcd-99d376601f66-kube-api-access-v5hqh\") pod \"ceilometer-0\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " pod="openstack/ceilometer-0" Jan 30 16:23:44 crc kubenswrapper[4740]: I0130 16:23:44.599100 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:23:45 crc kubenswrapper[4740]: I0130 16:23:45.099677 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 16:23:45 crc kubenswrapper[4740]: I0130 16:23:45.174120 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c1941da-08f1-48e5-afcd-99d376601f66","Type":"ContainerStarted","Data":"5277b4e82603d45e00beaf93c57d203aceb833eae7fdce252b79fa607e583bb3"} Jan 30 16:23:45 crc kubenswrapper[4740]: I0130 16:23:45.355938 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a66ba5b7-134a-4917-9426-fdad3dfe7dfc" path="/var/lib/kubelet/pods/a66ba5b7-134a-4917-9426-fdad3dfe7dfc/volumes" Jan 30 16:23:46 crc kubenswrapper[4740]: I0130 16:23:46.190764 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c1941da-08f1-48e5-afcd-99d376601f66","Type":"ContainerStarted","Data":"7a851983110fc9fcf2276523f647775a334c3e02b1d99c4602e5eb175b64f243"} Jan 30 16:23:47 crc kubenswrapper[4740]: I0130 16:23:47.207920 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c1941da-08f1-48e5-afcd-99d376601f66","Type":"ContainerStarted","Data":"d1cb4b028f70f624ddecec8ad53cec657e59d675e156b3f5dc2eb2f73416dede"} Jan 30 16:23:47 crc kubenswrapper[4740]: I0130 16:23:47.210878 4740 generic.go:334] "Generic (PLEG): container finished" podID="f0da4cd8-4245-4e9d-bafb-5054e6ed647c" containerID="1b8bd308acaad5bda22ba9cdf85eea3009e8f5b0a835b284c23faf9856264ed4" exitCode=0 Jan 30 16:23:47 crc kubenswrapper[4740]: I0130 16:23:47.210931 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9g9ww" 
event={"ID":"f0da4cd8-4245-4e9d-bafb-5054e6ed647c","Type":"ContainerDied","Data":"1b8bd308acaad5bda22ba9cdf85eea3009e8f5b0a835b284c23faf9856264ed4"} Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.226067 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c1941da-08f1-48e5-afcd-99d376601f66","Type":"ContainerStarted","Data":"2940ccc79857644c685aa97e8a272ba0ffb34636a71a14fac943893790cdfc01"} Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.264592 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.266027 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.271695 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.505573 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.508516 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.785075 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9g9ww" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.882787 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-combined-ca-bundle\") pod \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.882896 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-config-data\") pod \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.882949 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjxtf\" (UniqueName: \"kubernetes.io/projected/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-kube-api-access-sjxtf\") pod \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.883210 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-scripts\") pod \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\" (UID: \"f0da4cd8-4245-4e9d-bafb-5054e6ed647c\") " Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.892688 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-scripts" (OuterVolumeSpecName: "scripts") pod "f0da4cd8-4245-4e9d-bafb-5054e6ed647c" (UID: "f0da4cd8-4245-4e9d-bafb-5054e6ed647c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.894536 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-kube-api-access-sjxtf" (OuterVolumeSpecName: "kube-api-access-sjxtf") pod "f0da4cd8-4245-4e9d-bafb-5054e6ed647c" (UID: "f0da4cd8-4245-4e9d-bafb-5054e6ed647c"). InnerVolumeSpecName "kube-api-access-sjxtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.936618 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0da4cd8-4245-4e9d-bafb-5054e6ed647c" (UID: "f0da4cd8-4245-4e9d-bafb-5054e6ed647c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.937057 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-config-data" (OuterVolumeSpecName: "config-data") pod "f0da4cd8-4245-4e9d-bafb-5054e6ed647c" (UID: "f0da4cd8-4245-4e9d-bafb-5054e6ed647c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.986732 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.986774 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.986788 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjxtf\" (UniqueName: \"kubernetes.io/projected/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-kube-api-access-sjxtf\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:48 crc kubenswrapper[4740]: I0130 16:23:48.986800 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0da4cd8-4245-4e9d-bafb-5054e6ed647c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:49 crc kubenswrapper[4740]: I0130 16:23:49.248659 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9g9ww" event={"ID":"f0da4cd8-4245-4e9d-bafb-5054e6ed647c","Type":"ContainerDied","Data":"02c8c201054d3cfb6580bc8da84916c3d43fc6ad49560404f0c9b8484fbc3abe"} Jan 30 16:23:49 crc kubenswrapper[4740]: I0130 16:23:49.248722 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c8c201054d3cfb6580bc8da84916c3d43fc6ad49560404f0c9b8484fbc3abe" Jan 30 16:23:49 crc kubenswrapper[4740]: I0130 16:23:49.248687 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9g9ww" Jan 30 16:23:49 crc kubenswrapper[4740]: I0130 16:23:49.265170 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 16:23:49 crc kubenswrapper[4740]: I0130 16:23:49.446659 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:49 crc kubenswrapper[4740]: I0130 16:23:49.487470 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:49 crc kubenswrapper[4740]: I0130 16:23:49.488030 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc" containerName="nova-scheduler-scheduler" containerID="cri-o://c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20" gracePeriod=30 Jan 30 16:23:49 crc kubenswrapper[4740]: I0130 16:23:49.505157 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:23:49 crc kubenswrapper[4740]: I0130 16:23:49.523965 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:23:49 crc kubenswrapper[4740]: I0130 16:23:49.524079 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:23:50 crc kubenswrapper[4740]: I0130 16:23:50.260519 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerName="nova-api-log" containerID="cri-o://420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9" gracePeriod=30 Jan 30 16:23:50 crc kubenswrapper[4740]: I0130 16:23:50.260582 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerName="nova-api-api" containerID="cri-o://7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305" gracePeriod=30 Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.133912 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.263782 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-config-data\") pod \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.263927 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-combined-ca-bundle\") pod \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.263984 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r2fb\" (UniqueName: \"kubernetes.io/projected/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-kube-api-access-5r2fb\") pod \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\" (UID: \"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc\") " Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.292817 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-kube-api-access-5r2fb" (OuterVolumeSpecName: "kube-api-access-5r2fb") pod "b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc" (UID: "b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc"). InnerVolumeSpecName "kube-api-access-5r2fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.333812 4740 generic.go:334] "Generic (PLEG): container finished" podID="b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc" containerID="c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20" exitCode=0 Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.333994 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc","Type":"ContainerDied","Data":"c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20"} Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.334041 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc","Type":"ContainerDied","Data":"fbe3c5bce19bf95cafd44844885bbb6f6616b38a9bc9e08e194a55b016857214"} Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.334064 4740 scope.go:117] "RemoveContainer" containerID="c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.334293 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.342627 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:23:51 crc kubenswrapper[4740]: E0130 16:23:51.342929 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.372499 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r2fb\" (UniqueName: \"kubernetes.io/projected/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-kube-api-access-5r2fb\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.397534 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc" (UID: "b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.416637 4740 scope.go:117] "RemoveContainer" containerID="c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20" Jan 30 16:23:51 crc kubenswrapper[4740]: E0130 16:23:51.424416 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20\": container with ID starting with c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20 not found: ID does not exist" containerID="c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.424492 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20"} err="failed to get container status \"c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20\": rpc error: code = NotFound desc = could not find container \"c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20\": container with ID starting with c544a8b9580a9851e7d862e3d295734b6eb41a89fcf5d2d796e116e2f57b6b20 not found: ID does not exist" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.446810 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-config-data" (OuterVolumeSpecName: "config-data") pod "b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc" (UID: "b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.448966 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.012401247 podStartE2EDuration="7.448938772s" podCreationTimestamp="2026-01-30 16:23:44 +0000 UTC" firstStartedPulling="2026-01-30 16:23:45.111246346 +0000 UTC m=+1673.748308945" lastFinishedPulling="2026-01-30 16:23:50.547783871 +0000 UTC m=+1679.184846470" observedRunningTime="2026-01-30 16:23:51.424058444 +0000 UTC m=+1680.061121043" watchObservedRunningTime="2026-01-30 16:23:51.448938772 +0000 UTC m=+1680.086001371" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.456719 4740 generic.go:334] "Generic (PLEG): container finished" podID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerID="420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9" exitCode=143 Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.457003 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35899e71-0177-4329-9152-85c03acb6e32" containerName="nova-metadata-log" containerID="cri-o://ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb" gracePeriod=30 Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.458106 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35899e71-0177-4329-9152-85c03acb6e32" containerName="nova-metadata-metadata" containerID="cri-o://9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed" gracePeriod=30 Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.474254 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.474305 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.510863 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.510915 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c1941da-08f1-48e5-afcd-99d376601f66","Type":"ContainerStarted","Data":"3158d9cb2aa79d74dbdd6d888214270f15b9c00e202ce2738678f8a6631f5bf4"} Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.510943 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1dda2e1-4965-4204-beb0-692bd27bc2a7","Type":"ContainerDied","Data":"420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9"} Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.684455 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.703690 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.721046 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:51 crc kubenswrapper[4740]: E0130 16:23:51.721911 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc" 
containerName="nova-scheduler-scheduler" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.722017 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc" containerName="nova-scheduler-scheduler" Jan 30 16:23:51 crc kubenswrapper[4740]: E0130 16:23:51.722107 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0da4cd8-4245-4e9d-bafb-5054e6ed647c" containerName="nova-manage" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.722179 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0da4cd8-4245-4e9d-bafb-5054e6ed647c" containerName="nova-manage" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.722498 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc" containerName="nova-scheduler-scheduler" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.722598 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0da4cd8-4245-4e9d-bafb-5054e6ed647c" containerName="nova-manage" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.723632 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.726741 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.738055 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.783155 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ae8da8-b7e7-40ca-8116-a91dc003a22c-config-data\") pod \"nova-scheduler-0\" (UID: \"98ae8da8-b7e7-40ca-8116-a91dc003a22c\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.783458 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ae8da8-b7e7-40ca-8116-a91dc003a22c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98ae8da8-b7e7-40ca-8116-a91dc003a22c\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.783594 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwsfp\" (UniqueName: \"kubernetes.io/projected/98ae8da8-b7e7-40ca-8116-a91dc003a22c-kube-api-access-kwsfp\") pod \"nova-scheduler-0\" (UID: \"98ae8da8-b7e7-40ca-8116-a91dc003a22c\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.885864 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwsfp\" (UniqueName: \"kubernetes.io/projected/98ae8da8-b7e7-40ca-8116-a91dc003a22c-kube-api-access-kwsfp\") pod \"nova-scheduler-0\" (UID: \"98ae8da8-b7e7-40ca-8116-a91dc003a22c\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.886024 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ae8da8-b7e7-40ca-8116-a91dc003a22c-config-data\") pod \"nova-scheduler-0\" (UID: \"98ae8da8-b7e7-40ca-8116-a91dc003a22c\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.886264 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ae8da8-b7e7-40ca-8116-a91dc003a22c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98ae8da8-b7e7-40ca-8116-a91dc003a22c\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.892546 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ae8da8-b7e7-40ca-8116-a91dc003a22c-config-data\") pod \"nova-scheduler-0\" (UID: \"98ae8da8-b7e7-40ca-8116-a91dc003a22c\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.901434 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ae8da8-b7e7-40ca-8116-a91dc003a22c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98ae8da8-b7e7-40ca-8116-a91dc003a22c\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:51 crc kubenswrapper[4740]: I0130 16:23:51.921844 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwsfp\" (UniqueName: \"kubernetes.io/projected/98ae8da8-b7e7-40ca-8116-a91dc003a22c-kube-api-access-kwsfp\") pod \"nova-scheduler-0\" (UID: \"98ae8da8-b7e7-40ca-8116-a91dc003a22c\") " pod="openstack/nova-scheduler-0" Jan 30 16:23:52 crc kubenswrapper[4740]: I0130 16:23:52.049153 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 16:23:52 crc kubenswrapper[4740]: I0130 16:23:52.485813 4740 generic.go:334] "Generic (PLEG): container finished" podID="35899e71-0177-4329-9152-85c03acb6e32" containerID="ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb" exitCode=143 Jan 30 16:23:52 crc kubenswrapper[4740]: I0130 16:23:52.486574 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35899e71-0177-4329-9152-85c03acb6e32","Type":"ContainerDied","Data":"ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb"} Jan 30 16:23:52 crc kubenswrapper[4740]: I0130 16:23:52.618539 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 16:23:53 crc kubenswrapper[4740]: I0130 16:23:53.349865 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc" path="/var/lib/kubelet/pods/b4b0f5f4-cc32-4dd6-84d8-5eb49153d8fc/volumes" Jan 30 16:23:53 crc kubenswrapper[4740]: I0130 16:23:53.497154 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98ae8da8-b7e7-40ca-8116-a91dc003a22c","Type":"ContainerStarted","Data":"cb043ea214381ffe0b504ba94f4f22b3f5d44abeb0304c4a74f98f5dd8772a58"} Jan 30 16:23:53 crc kubenswrapper[4740]: I0130 16:23:53.497220 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98ae8da8-b7e7-40ca-8116-a91dc003a22c","Type":"ContainerStarted","Data":"fdaae862c7498f00500614492f0dbc1f01eb92446ee3ce3c909ce20da04f3c98"} Jan 30 16:23:53 crc kubenswrapper[4740]: I0130 16:23:53.522874 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.522844461 podStartE2EDuration="2.522844461s" podCreationTimestamp="2026-01-30 16:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:23:53.511315185 +0000 UTC 
m=+1682.148377784" watchObservedRunningTime="2026-01-30 16:23:53.522844461 +0000 UTC m=+1682.159907060" Jan 30 16:23:54 crc kubenswrapper[4740]: I0130 16:23:54.601136 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="35899e71-0177-4329-9152-85c03acb6e32" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": read tcp 10.217.0.2:57982->10.217.0.228:8775: read: connection reset by peer" Jan 30 16:23:54 crc kubenswrapper[4740]: I0130 16:23:54.601154 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="35899e71-0177-4329-9152-85c03acb6e32" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": read tcp 10.217.0.2:57988->10.217.0.228:8775: read: connection reset by peer" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.221515 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.299781 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25s86\" (UniqueName: \"kubernetes.io/projected/35899e71-0177-4329-9152-85c03acb6e32-kube-api-access-25s86\") pod \"35899e71-0177-4329-9152-85c03acb6e32\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.299881 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-nova-metadata-tls-certs\") pod \"35899e71-0177-4329-9152-85c03acb6e32\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.300029 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-combined-ca-bundle\") pod \"35899e71-0177-4329-9152-85c03acb6e32\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.300062 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35899e71-0177-4329-9152-85c03acb6e32-logs\") pod \"35899e71-0177-4329-9152-85c03acb6e32\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.300156 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-config-data\") pod \"35899e71-0177-4329-9152-85c03acb6e32\" (UID: \"35899e71-0177-4329-9152-85c03acb6e32\") " Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.302091 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35899e71-0177-4329-9152-85c03acb6e32-logs" (OuterVolumeSpecName: "logs") pod "35899e71-0177-4329-9152-85c03acb6e32" (UID: "35899e71-0177-4329-9152-85c03acb6e32"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.308540 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35899e71-0177-4329-9152-85c03acb6e32-kube-api-access-25s86" (OuterVolumeSpecName: "kube-api-access-25s86") pod "35899e71-0177-4329-9152-85c03acb6e32" (UID: "35899e71-0177-4329-9152-85c03acb6e32"). InnerVolumeSpecName "kube-api-access-25s86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.362177 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-config-data" (OuterVolumeSpecName: "config-data") pod "35899e71-0177-4329-9152-85c03acb6e32" (UID: "35899e71-0177-4329-9152-85c03acb6e32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.384502 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "35899e71-0177-4329-9152-85c03acb6e32" (UID: "35899e71-0177-4329-9152-85c03acb6e32"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.388020 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35899e71-0177-4329-9152-85c03acb6e32" (UID: "35899e71-0177-4329-9152-85c03acb6e32"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.403725 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25s86\" (UniqueName: \"kubernetes.io/projected/35899e71-0177-4329-9152-85c03acb6e32-kube-api-access-25s86\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.403764 4740 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.403776 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.403785 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35899e71-0177-4329-9152-85c03acb6e32-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.403793 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35899e71-0177-4329-9152-85c03acb6e32-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.525404 4740 generic.go:334] "Generic (PLEG): container finished" podID="35899e71-0177-4329-9152-85c03acb6e32" containerID="9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed" exitCode=0 Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.525466 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35899e71-0177-4329-9152-85c03acb6e32","Type":"ContainerDied","Data":"9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed"} Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.525506 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35899e71-0177-4329-9152-85c03acb6e32","Type":"ContainerDied","Data":"3da4c7f1712190c690028ea57bac904273dd9178fccb8732342c0a1580d240d6"} Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.525527 4740 scope.go:117] "RemoveContainer" containerID="9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.525710 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.577714 4740 scope.go:117] "RemoveContainer" containerID="ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.590142 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.608418 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.615765 4740 scope.go:117] "RemoveContainer" containerID="9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed" Jan 30 16:23:55 crc kubenswrapper[4740]: E0130 16:23:55.628628 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed\": container with ID starting with 9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed not found: ID does not exist" containerID="9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.628704 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed"} err="failed to get container status \"9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed\": rpc error: code = NotFound desc = could not find container \"9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed\": container with ID starting with 9cfaae05a86a2c57a2fab500e768ddd69cd72099ae779984b4676a32e762abed not found: ID does not exist" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.628749 4740 scope.go:117] "RemoveContainer" containerID="ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb" Jan 30 16:23:55 crc kubenswrapper[4740]: E0130 16:23:55.631876 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb\": container with ID starting with ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb not found: ID does not exist" containerID="ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.631937 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb"} err="failed to get container status \"ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb\": rpc error: code = NotFound desc = could not find container \"ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb\": container with ID starting with ee6342065b4184bb60c32c804cd2657bf4f8a94214c68313f2bc8e75c68a89eb not found: ID does not exist" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.656788 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:23:55 crc kubenswrapper[4740]: E0130 16:23:55.658042 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35899e71-0177-4329-9152-85c03acb6e32" containerName="nova-metadata-log" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.658071 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="35899e71-0177-4329-9152-85c03acb6e32" 
containerName="nova-metadata-log" Jan 30 16:23:55 crc kubenswrapper[4740]: E0130 16:23:55.658095 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35899e71-0177-4329-9152-85c03acb6e32" containerName="nova-metadata-metadata" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.658103 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="35899e71-0177-4329-9152-85c03acb6e32" containerName="nova-metadata-metadata" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.660185 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="35899e71-0177-4329-9152-85c03acb6e32" containerName="nova-metadata-log" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.660279 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="35899e71-0177-4329-9152-85c03acb6e32" containerName="nova-metadata-metadata" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.662921 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.666567 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.667252 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.684312 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.722875 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1f00ea-3a1e-4684-ad0f-26180738550d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.722981 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1f00ea-3a1e-4684-ad0f-26180738550d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.723008 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1f00ea-3a1e-4684-ad0f-26180738550d-logs\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.723025 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qjhk\" (UniqueName: \"kubernetes.io/projected/2e1f00ea-3a1e-4684-ad0f-26180738550d-kube-api-access-7qjhk\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.723045 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1f00ea-3a1e-4684-ad0f-26180738550d-config-data\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.825036 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1f00ea-3a1e-4684-ad0f-26180738550d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.825091 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1f00ea-3a1e-4684-ad0f-26180738550d-logs\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.825116 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qjhk\" (UniqueName: \"kubernetes.io/projected/2e1f00ea-3a1e-4684-ad0f-26180738550d-kube-api-access-7qjhk\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.825155 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1f00ea-3a1e-4684-ad0f-26180738550d-config-data\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.825378 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1f00ea-3a1e-4684-ad0f-26180738550d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.825693 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1f00ea-3a1e-4684-ad0f-26180738550d-logs\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.829003 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1f00ea-3a1e-4684-ad0f-26180738550d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.829080 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1f00ea-3a1e-4684-ad0f-26180738550d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.829732 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1f00ea-3a1e-4684-ad0f-26180738550d-config-data\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:55 crc kubenswrapper[4740]: I0130 16:23:55.843068 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qjhk\" (UniqueName: \"kubernetes.io/projected/2e1f00ea-3a1e-4684-ad0f-26180738550d-kube-api-access-7qjhk\") pod \"nova-metadata-0\" (UID: \"2e1f00ea-3a1e-4684-ad0f-26180738550d\") " pod="openstack/nova-metadata-0" Jan 30 16:23:56 crc kubenswrapper[4740]: I0130 
Jan 30 16:23:56 crc kubenswrapper[4740]: I0130 16:23:56.050330 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 16:23:56 crc kubenswrapper[4740]: I0130 16:23:56.545260 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 16:23:56 crc kubenswrapper[4740]: W0130 16:23:56.547594 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e1f00ea_3a1e_4684_ad0f_26180738550d.slice/crio-0649f46c22d26a3cca288bc0802db1bc47b8b7a94edf0671732ed37b4c2dd08d WatchSource:0}: Error finding container 0649f46c22d26a3cca288bc0802db1bc47b8b7a94edf0671732ed37b4c2dd08d: Status 404 returned error can't find the container with id 0649f46c22d26a3cca288bc0802db1bc47b8b7a94edf0671732ed37b4c2dd08d
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.050593 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.102812 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.164963 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-public-tls-certs\") pod \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") "
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.165056 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-internal-tls-certs\") pod \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") "
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.165213 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1dda2e1-4965-4204-beb0-692bd27bc2a7-logs\") pod \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") "
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.165258 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-combined-ca-bundle\") pod \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") "
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.165315 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-config-data\") pod \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") "
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.165398 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-767xq\" (UniqueName: \"kubernetes.io/projected/a1dda2e1-4965-4204-beb0-692bd27bc2a7-kube-api-access-767xq\") pod \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\" (UID: \"a1dda2e1-4965-4204-beb0-692bd27bc2a7\") "
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.166034 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1dda2e1-4965-4204-beb0-692bd27bc2a7-logs" (OuterVolumeSpecName: "logs") pod "a1dda2e1-4965-4204-beb0-692bd27bc2a7" (UID: "a1dda2e1-4965-4204-beb0-692bd27bc2a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.172238 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1dda2e1-4965-4204-beb0-692bd27bc2a7-kube-api-access-767xq" (OuterVolumeSpecName: "kube-api-access-767xq") pod "a1dda2e1-4965-4204-beb0-692bd27bc2a7" (UID: "a1dda2e1-4965-4204-beb0-692bd27bc2a7"). InnerVolumeSpecName "kube-api-access-767xq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.200150 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1dda2e1-4965-4204-beb0-692bd27bc2a7" (UID: "a1dda2e1-4965-4204-beb0-692bd27bc2a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.200611 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-config-data" (OuterVolumeSpecName: "config-data") pod "a1dda2e1-4965-4204-beb0-692bd27bc2a7" (UID: "a1dda2e1-4965-4204-beb0-692bd27bc2a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.230940 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a1dda2e1-4965-4204-beb0-692bd27bc2a7" (UID: "a1dda2e1-4965-4204-beb0-692bd27bc2a7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.241760 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a1dda2e1-4965-4204-beb0-692bd27bc2a7" (UID: "a1dda2e1-4965-4204-beb0-692bd27bc2a7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.268648 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.268897 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-767xq\" (UniqueName: \"kubernetes.io/projected/a1dda2e1-4965-4204-beb0-692bd27bc2a7-kube-api-access-767xq\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.268957 4740 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.269048 4740 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.269111 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1dda2e1-4965-4204-beb0-692bd27bc2a7-logs\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.269165 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1dda2e1-4965-4204-beb0-692bd27bc2a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.348951 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35899e71-0177-4329-9152-85c03acb6e32" path="/var/lib/kubelet/pods/35899e71-0177-4329-9152-85c03acb6e32/volumes"
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.555445 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e1f00ea-3a1e-4684-ad0f-26180738550d","Type":"ContainerStarted","Data":"dcc549fa64c631b4985e5dcf6a443e6cfadeecf0dc7aee43f7df5406443dee9c"}
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.555959 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e1f00ea-3a1e-4684-ad0f-26180738550d","Type":"ContainerStarted","Data":"3b64d2287850905936f0388a124b75b819400fbe419833e5ff86d2b2c97ef460"}
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.555976 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e1f00ea-3a1e-4684-ad0f-26180738550d","Type":"ContainerStarted","Data":"0649f46c22d26a3cca288bc0802db1bc47b8b7a94edf0671732ed37b4c2dd08d"}
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.557513 4740 generic.go:334] "Generic (PLEG): container finished" podID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerID="7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305" exitCode=0
Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.557558 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1dda2e1-4965-4204-beb0-692bd27bc2a7","Type":"ContainerDied","Data":"7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305"}
event={"ID":"a1dda2e1-4965-4204-beb0-692bd27bc2a7","Type":"ContainerDied","Data":"c1cbafa83e2be52db201622dafe2271176fc2fc95f0ffb61fda9e60f721fa0eb"} Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.557609 4740 scope.go:117] "RemoveContainer" containerID="7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.557621 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.575220 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.575194918 podStartE2EDuration="2.575194918s" podCreationTimestamp="2026-01-30 16:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:23:57.573078335 +0000 UTC m=+1686.210140924" watchObservedRunningTime="2026-01-30 16:23:57.575194918 +0000 UTC m=+1686.212257517" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.590219 4740 scope.go:117] "RemoveContainer" containerID="420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.607784 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.622176 4740 scope.go:117] "RemoveContainer" containerID="7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305" Jan 30 16:23:57 crc kubenswrapper[4740]: E0130 16:23:57.622787 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305\": container with ID starting with 7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305 not found: ID does not exist" containerID="7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.622851 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305"} err="failed to get container status \"7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305\": rpc error: code = NotFound desc = could not find container \"7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305\": container with ID starting with 7623df670e228abe639592b88199247a7e2ecf35ce9361017340b38f20c98305 not found: ID does not exist" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.622897 4740 scope.go:117] "RemoveContainer" containerID="420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9" Jan 30 16:23:57 crc kubenswrapper[4740]: E0130 16:23:57.623232 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9\": container with ID starting with 420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9 not found: ID does not exist" containerID="420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.623270 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9"} err="failed to get container status 
\"420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9\": rpc error: code = NotFound desc = could not find container \"420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9\": container with ID starting with 420d2d8647d806ba337c85f2286ae7b3ad222d9746e93301c5607af6902996c9 not found: ID does not exist" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.625002 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.638285 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:57 crc kubenswrapper[4740]: E0130 16:23:57.638930 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerName="nova-api-api" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.638951 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerName="nova-api-api" Jan 30 16:23:57 crc kubenswrapper[4740]: E0130 16:23:57.638980 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerName="nova-api-log" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.638988 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerName="nova-api-log" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.639209 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerName="nova-api-api" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.639228 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" containerName="nova-api-log" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.640820 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.644362 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.645798 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.645941 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.652936 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.682046 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njjdv\" (UniqueName: \"kubernetes.io/projected/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-kube-api-access-njjdv\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.682106 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.682135 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-logs\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.682726 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.683668 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-config-data\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.683690 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.786600 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.786780 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-config-data\") pod \"nova-api-0\" (UID: 
\"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.786810 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.786877 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njjdv\" (UniqueName: \"kubernetes.io/projected/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-kube-api-access-njjdv\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.786915 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.786938 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-logs\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.787746 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-logs\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.794641 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.794733 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.794973 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-config-data\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.797818 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.806245 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njjdv\" (UniqueName: \"kubernetes.io/projected/2be29a5b-4407-4ef0-bf73-538f62c7ae2e-kube-api-access-njjdv\") pod \"nova-api-0\" (UID: \"2be29a5b-4407-4ef0-bf73-538f62c7ae2e\") " pod="openstack/nova-api-0" Jan 
30 16:23:57 crc kubenswrapper[4740]: I0130 16:23:57.991102 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 16:23:58 crc kubenswrapper[4740]: W0130 16:23:58.513697 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2be29a5b_4407_4ef0_bf73_538f62c7ae2e.slice/crio-8846edfb23b1bad33912f6681ede800c39879cccb16281056f294b2dbce77b0e WatchSource:0}: Error finding container 8846edfb23b1bad33912f6681ede800c39879cccb16281056f294b2dbce77b0e: Status 404 returned error can't find the container with id 8846edfb23b1bad33912f6681ede800c39879cccb16281056f294b2dbce77b0e Jan 30 16:23:58 crc kubenswrapper[4740]: I0130 16:23:58.514968 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 16:23:58 crc kubenswrapper[4740]: I0130 16:23:58.575518 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2be29a5b-4407-4ef0-bf73-538f62c7ae2e","Type":"ContainerStarted","Data":"8846edfb23b1bad33912f6681ede800c39879cccb16281056f294b2dbce77b0e"} Jan 30 16:23:59 crc kubenswrapper[4740]: I0130 16:23:59.350242 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1dda2e1-4965-4204-beb0-692bd27bc2a7" path="/var/lib/kubelet/pods/a1dda2e1-4965-4204-beb0-692bd27bc2a7/volumes" Jan 30 16:23:59 crc kubenswrapper[4740]: I0130 16:23:59.589583 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2be29a5b-4407-4ef0-bf73-538f62c7ae2e","Type":"ContainerStarted","Data":"ddbe4014f9e5463bd6b940e74a96f67de3b9e38fb5ddd76bf0b2885427f2df4b"} Jan 30 16:23:59 crc kubenswrapper[4740]: I0130 16:23:59.589643 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2be29a5b-4407-4ef0-bf73-538f62c7ae2e","Type":"ContainerStarted","Data":"54b14e9de7fa3fa3260d726c84caacc9325a238d2b3e0216c3c6146450126aa8"} Jan 30 16:23:59 crc kubenswrapper[4740]: I0130 16:23:59.616613 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.616575498 podStartE2EDuration="2.616575498s" podCreationTimestamp="2026-01-30 16:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:23:59.607956564 +0000 UTC m=+1688.245019183" watchObservedRunningTime="2026-01-30 16:23:59.616575498 +0000 UTC m=+1688.253638097" Jan 30 16:24:01 crc kubenswrapper[4740]: I0130 16:24:01.050870 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 16:24:01 crc kubenswrapper[4740]: I0130 16:24:01.052410 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 16:24:02 crc kubenswrapper[4740]: I0130 16:24:02.050400 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 16:24:02 crc kubenswrapper[4740]: I0130 16:24:02.094618 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 16:24:02 crc kubenswrapper[4740]: I0130 16:24:02.670575 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 16:24:05 crc kubenswrapper[4740]: I0130 16:24:05.335289 4740 scope.go:117] "RemoveContainer" 
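
Unlike ceilometer-0 earlier, the startup entries for nova-scheduler-0, nova-metadata-0 and nova-api-0 report firstStartedPulling and lastFinishedPulling as "0001-01-01 00:00:00 +0000 UTC". That is Go's zero time.Time serialized: no image pull happened because the images were already on the node, which is why podStartSLOduration equals podStartE2EDuration for all three pods. A trivial check:

    GO_ZERO_TIME = "0001-01-01 00:00:00 +0000 UTC"

    # Go serializes an unset time.Time as year 1; in the startup entries above
    # it marks pods whose images were already cached, so SLO == E2E duration.
    def image_was_pulled(first_started_pulling: str) -> bool:
        return first_started_pulling != GO_ZERO_TIME

    print(image_was_pulled("0001-01-01 00:00:00 +0000 UTC"))  # False: nova-api-0 above
    print(image_was_pulled("2026-01-30 16:23:45.111246346 +0000 UTC m=+1673.748308945"))  # True: ceilometer-0
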
containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:24:05 crc kubenswrapper[4740]: E0130 16:24:05.335968 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:24:06 crc kubenswrapper[4740]: I0130 16:24:06.051314 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 16:24:06 crc kubenswrapper[4740]: I0130 16:24:06.051886 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 16:24:07 crc kubenswrapper[4740]: I0130 16:24:07.071639 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2e1f00ea-3a1e-4684-ad0f-26180738550d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.234:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:24:07 crc kubenswrapper[4740]: I0130 16:24:07.071652 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2e1f00ea-3a1e-4684-ad0f-26180738550d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.234:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:24:07 crc kubenswrapper[4740]: I0130 16:24:07.991830 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:24:07 crc kubenswrapper[4740]: I0130 16:24:07.992305 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 16:24:08 crc kubenswrapper[4740]: I0130 16:24:08.997682 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2be29a5b-4407-4ef0-bf73-538f62c7ae2e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.235:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:24:09 crc kubenswrapper[4740]: I0130 16:24:09.003625 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2be29a5b-4407-4ef0-bf73-538f62c7ae2e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.235:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 16:24:14 crc kubenswrapper[4740]: I0130 16:24:14.609850 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 16:24:16 crc kubenswrapper[4740]: I0130 16:24:16.058758 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 16:24:16 crc kubenswrapper[4740]: I0130 16:24:16.063950 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 16:24:16 crc kubenswrapper[4740]: I0130 16:24:16.064881 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 16:24:16 crc kubenswrapper[4740]: I0130 16:24:16.336506 4740 scope.go:117] "RemoveContainer" 
containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:24:16 crc kubenswrapper[4740]: E0130 16:24:16.336894 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:24:16 crc kubenswrapper[4740]: I0130 16:24:16.806644 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 16:24:18 crc kubenswrapper[4740]: I0130 16:24:18.000188 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 16:24:18 crc kubenswrapper[4740]: I0130 16:24:18.001640 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 16:24:18 crc kubenswrapper[4740]: I0130 16:24:18.003365 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 16:24:18 crc kubenswrapper[4740]: I0130 16:24:18.014706 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 16:24:18 crc kubenswrapper[4740]: I0130 16:24:18.818646 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 16:24:18 crc kubenswrapper[4740]: I0130 16:24:18.826116 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 16:24:28 crc kubenswrapper[4740]: I0130 16:24:28.973488 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-lfz95"] Jan 30 16:24:28 crc kubenswrapper[4740]: I0130 16:24:28.984571 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-lfz95"] Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.075707 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-6mkbr"] Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.078120 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.081503 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.106319 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-6mkbr"] Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.242856 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-combined-ca-bundle\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.242916 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-certs\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.242952 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-scripts\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.243071 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-config-data\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.243096 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq6vv\" (UniqueName: \"kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-kube-api-access-nq6vv\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.345601 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-config-data\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.345658 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq6vv\" (UniqueName: \"kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-kube-api-access-nq6vv\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.345788 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-combined-ca-bundle\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: 
I0130 16:24:29.345810 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-certs\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.345847 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-scripts\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.350220 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29755348-1e90-4436-8a60-a2823c2804fd" path="/var/lib/kubelet/pods/29755348-1e90-4436-8a60-a2823c2804fd/volumes" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.354265 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-certs\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.354274 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-scripts\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.356627 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-combined-ca-bundle\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.356789 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-config-data\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.370960 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq6vv\" (UniqueName: \"kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-kube-api-access-nq6vv\") pod \"cloudkitty-db-sync-6mkbr\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.400894 4740 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.400894 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-6mkbr"
Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.942274 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.946155 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-6mkbr"]
Jan 30 16:24:29 crc kubenswrapper[4740]: I0130 16:24:29.978608 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-6mkbr" event={"ID":"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a","Type":"ContainerStarted","Data":"1708162fbffe0fa30b28bcec7d3a8956352d1a1030b13bf1d0d98d27329dfbb7"}
Jan 30 16:24:30 crc kubenswrapper[4740]: I0130 16:24:30.756392 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 16:24:30 crc kubenswrapper[4740]: I0130 16:24:30.995016 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-6mkbr" event={"ID":"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a","Type":"ContainerStarted","Data":"f5591d7dc942b0127629f7755fd469f1481988fd6c775b26634319207dcebda7"}
Jan 30 16:24:31 crc kubenswrapper[4740]: I0130 16:24:31.016432 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-6mkbr" podStartSLOduration=1.809130456 podStartE2EDuration="2.016399334s" podCreationTimestamp="2026-01-30 16:24:29 +0000 UTC" firstStartedPulling="2026-01-30 16:24:29.94154541 +0000 UTC m=+1718.578608009" lastFinishedPulling="2026-01-30 16:24:30.148814288 +0000 UTC m=+1718.785876887" observedRunningTime="2026-01-30 16:24:31.011923103 +0000 UTC m=+1719.648985702" watchObservedRunningTime="2026-01-30 16:24:31.016399334 +0000 UTC m=+1719.653461933"
Jan 30 16:24:31 crc kubenswrapper[4740]: I0130 16:24:31.053563 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:24:31 crc kubenswrapper[4740]: I0130 16:24:31.054003 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="ceilometer-central-agent" containerID="cri-o://7a851983110fc9fcf2276523f647775a334c3e02b1d99c4602e5eb175b64f243" gracePeriod=30
Jan 30 16:24:31 crc kubenswrapper[4740]: I0130 16:24:31.054202 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="proxy-httpd" containerID="cri-o://3158d9cb2aa79d74dbdd6d888214270f15b9c00e202ce2738678f8a6631f5bf4" gracePeriod=30
Jan 30 16:24:31 crc kubenswrapper[4740]: I0130 16:24:31.054254 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="sg-core" containerID="cri-o://2940ccc79857644c685aa97e8a272ba0ffb34636a71a14fac943893790cdfc01" gracePeriod=30
Jan 30 16:24:31 crc kubenswrapper[4740]: I0130 16:24:31.054317 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="ceilometer-notification-agent" containerID="cri-o://d1cb4b028f70f624ddecec8ad53cec657e59d675e156b3f5dc2eb2f73416dede" gracePeriod=30
Jan 30 16:24:31 crc kubenswrapper[4740]: I0130 16:24:31.336103 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97"
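[Editor's note] The pod_startup_latency_tracker entry above prints three durations that are mutually consistent: podStartSLOduration reads as the end-to-end startup time minus the image-pull window. A back-of-the-envelope check with the exact values logged; the subtraction rule is my reading of the entry, not a quote of kubelet source:

```go
// Verify podStartSLOduration = E2E duration - image-pull window
// using the numbers from the 16:24:31.016432 entry above.
package main

import "fmt"

func main() {
	e2e := 2.016399334                 // observedRunningTime - podCreationTimestamp, seconds
	pull := 30.148814288 - 29.94154541 // lastFinishedPulling - firstStartedPulling, seconds
	fmt.Printf("podStartSLOduration = %.9f s\n", e2e-pull) // 1.809130456, as logged
}
```

The later cloudkitty-storageinit-s62jp entry at 16:24:39.190671 is consistent with this reading: its pull timestamps are zero values and its SLO and E2E durations are equal.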
Jan 30 16:24:31 crc kubenswrapper[4740]: E0130 16:24:31.336879 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:24:31 crc kubenswrapper[4740]: I0130 16:24:31.849623 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 16:24:31 crc kubenswrapper[4740]: I0130 16:24:31.975261 4740 scope.go:117] "RemoveContainer" containerID="07f2c899e94841a9469b670a950d98f5fdd1ecf52f9bb42ea3bf534141dc91b2"
Jan 30 16:24:32 crc kubenswrapper[4740]: I0130 16:24:32.018120 4740 generic.go:334] "Generic (PLEG): container finished" podID="5c1941da-08f1-48e5-afcd-99d376601f66" containerID="3158d9cb2aa79d74dbdd6d888214270f15b9c00e202ce2738678f8a6631f5bf4" exitCode=0
Jan 30 16:24:32 crc kubenswrapper[4740]: I0130 16:24:32.018161 4740 generic.go:334] "Generic (PLEG): container finished" podID="5c1941da-08f1-48e5-afcd-99d376601f66" containerID="2940ccc79857644c685aa97e8a272ba0ffb34636a71a14fac943893790cdfc01" exitCode=2
Jan 30 16:24:32 crc kubenswrapper[4740]: I0130 16:24:32.018171 4740 generic.go:334] "Generic (PLEG): container finished" podID="5c1941da-08f1-48e5-afcd-99d376601f66" containerID="7a851983110fc9fcf2276523f647775a334c3e02b1d99c4602e5eb175b64f243" exitCode=0
Jan 30 16:24:32 crc kubenswrapper[4740]: I0130 16:24:32.018973 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c1941da-08f1-48e5-afcd-99d376601f66","Type":"ContainerDied","Data":"3158d9cb2aa79d74dbdd6d888214270f15b9c00e202ce2738678f8a6631f5bf4"}
Jan 30 16:24:32 crc kubenswrapper[4740]: I0130 16:24:32.019013 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c1941da-08f1-48e5-afcd-99d376601f66","Type":"ContainerDied","Data":"2940ccc79857644c685aa97e8a272ba0ffb34636a71a14fac943893790cdfc01"}
Jan 30 16:24:32 crc kubenswrapper[4740]: I0130 16:24:32.019030 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c1941da-08f1-48e5-afcd-99d376601f66","Type":"ContainerDied","Data":"7a851983110fc9fcf2276523f647775a334c3e02b1d99c4602e5eb175b64f243"}
Jan 30 16:24:35 crc kubenswrapper[4740]: I0130 16:24:35.054261 4740 generic.go:334] "Generic (PLEG): container finished" podID="c7949eee-0a06-4fe5-9cfc-b08bfdecc24a" containerID="f5591d7dc942b0127629f7755fd469f1481988fd6c775b26634319207dcebda7" exitCode=0
Jan 30 16:24:35 crc kubenswrapper[4740]: I0130 16:24:35.054332 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-6mkbr" event={"ID":"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a","Type":"ContainerDied","Data":"f5591d7dc942b0127629f7755fd469f1481988fd6c775b26634319207dcebda7"}
Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.072757 4740 generic.go:334] "Generic (PLEG): container finished" podID="5c1941da-08f1-48e5-afcd-99d376601f66" containerID="d1cb4b028f70f624ddecec8ad53cec657e59d675e156b3f5dc2eb2f73416dede" exitCode=0
Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.072880 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c1941da-08f1-48e5-afcd-99d376601f66","Type":"ContainerDied","Data":"d1cb4b028f70f624ddecec8ad53cec657e59d675e156b3f5dc2eb2f73416dede"}
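[Editor's note] The CrashLoopBackOff error above shows the kubelet holding a repeatedly failing container in a capped restart back-off ("back-off 5m0s"). A toy illustration of such a doubling-with-cap schedule; only the 5m cap appears in this log, so the 10s base here is an assumption, and this is not kubelet source:

```go
// Doubling restart back-off with a hard cap, as suggested by the
// "back-off 5m0s restarting failed container" message above.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute // base is assumed; cap is from the log
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // once saturated, every retry waits the full 5m0s
		}
	}
}
```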
event={"ID":"5c1941da-08f1-48e5-afcd-99d376601f66","Type":"ContainerDied","Data":"d1cb4b028f70f624ddecec8ad53cec657e59d675e156b3f5dc2eb2f73416dede"} Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.732212 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.738833 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.861333 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-scripts\") pod \"5c1941da-08f1-48e5-afcd-99d376601f66\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.861570 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq6vv\" (UniqueName: \"kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-kube-api-access-nq6vv\") pod \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.861628 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-ceilometer-tls-certs\") pod \"5c1941da-08f1-48e5-afcd-99d376601f66\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.861653 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-certs\") pod \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.861732 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-combined-ca-bundle\") pod \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.861817 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-log-httpd\") pod \"5c1941da-08f1-48e5-afcd-99d376601f66\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.861846 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-sg-core-conf-yaml\") pod \"5c1941da-08f1-48e5-afcd-99d376601f66\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.861898 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-scripts\") pod \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.861974 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-run-httpd\") pod \"5c1941da-08f1-48e5-afcd-99d376601f66\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.862020 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-config-data\") pod \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\" (UID: \"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.862051 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-combined-ca-bundle\") pod \"5c1941da-08f1-48e5-afcd-99d376601f66\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.862110 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5hqh\" (UniqueName: \"kubernetes.io/projected/5c1941da-08f1-48e5-afcd-99d376601f66-kube-api-access-v5hqh\") pod \"5c1941da-08f1-48e5-afcd-99d376601f66\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.862152 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-config-data\") pod \"5c1941da-08f1-48e5-afcd-99d376601f66\" (UID: \"5c1941da-08f1-48e5-afcd-99d376601f66\") " Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.862446 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5c1941da-08f1-48e5-afcd-99d376601f66" (UID: "5c1941da-08f1-48e5-afcd-99d376601f66"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.862742 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5c1941da-08f1-48e5-afcd-99d376601f66" (UID: "5c1941da-08f1-48e5-afcd-99d376601f66"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.862808 4740 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.871212 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-kube-api-access-nq6vv" (OuterVolumeSpecName: "kube-api-access-nq6vv") pod "c7949eee-0a06-4fe5-9cfc-b08bfdecc24a" (UID: "c7949eee-0a06-4fe5-9cfc-b08bfdecc24a"). InnerVolumeSpecName "kube-api-access-nq6vv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.872803 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="3aae2bad-ea00-4d1f-a30f-a8891e15ad05" containerName="rabbitmq" containerID="cri-o://f5726cc466db1ba55b4c5679b88c91afdce24f273892f7a01b8a4fe90f232d59" gracePeriod=604794 Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.874038 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-certs" (OuterVolumeSpecName: "certs") pod "c7949eee-0a06-4fe5-9cfc-b08bfdecc24a" (UID: "c7949eee-0a06-4fe5-9cfc-b08bfdecc24a"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.874067 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-scripts" (OuterVolumeSpecName: "scripts") pod "5c1941da-08f1-48e5-afcd-99d376601f66" (UID: "5c1941da-08f1-48e5-afcd-99d376601f66"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.874872 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1941da-08f1-48e5-afcd-99d376601f66-kube-api-access-v5hqh" (OuterVolumeSpecName: "kube-api-access-v5hqh") pod "5c1941da-08f1-48e5-afcd-99d376601f66" (UID: "5c1941da-08f1-48e5-afcd-99d376601f66"). InnerVolumeSpecName "kube-api-access-v5hqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.900673 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-scripts" (OuterVolumeSpecName: "scripts") pod "c7949eee-0a06-4fe5-9cfc-b08bfdecc24a" (UID: "c7949eee-0a06-4fe5-9cfc-b08bfdecc24a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.924761 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-config-data" (OuterVolumeSpecName: "config-data") pod "c7949eee-0a06-4fe5-9cfc-b08bfdecc24a" (UID: "c7949eee-0a06-4fe5-9cfc-b08bfdecc24a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.962542 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5c1941da-08f1-48e5-afcd-99d376601f66" (UID: "5c1941da-08f1-48e5-afcd-99d376601f66"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.963034 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7949eee-0a06-4fe5-9cfc-b08bfdecc24a" (UID: "c7949eee-0a06-4fe5-9cfc-b08bfdecc24a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.965725 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.965858 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5hqh\" (UniqueName: \"kubernetes.io/projected/5c1941da-08f1-48e5-afcd-99d376601f66-kube-api-access-v5hqh\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.966272 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.966375 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq6vv\" (UniqueName: \"kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-kube-api-access-nq6vv\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.966451 4740 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.966524 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.966633 4740 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c1941da-08f1-48e5-afcd-99d376601f66-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.966694 4740 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.966748 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:36 crc kubenswrapper[4740]: I0130 16:24:36.972722 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5c1941da-08f1-48e5-afcd-99d376601f66" (UID: "5c1941da-08f1-48e5-afcd-99d376601f66"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.031517 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c1941da-08f1-48e5-afcd-99d376601f66" (UID: "5c1941da-08f1-48e5-afcd-99d376601f66"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.058760 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-config-data" (OuterVolumeSpecName: "config-data") pod "5c1941da-08f1-48e5-afcd-99d376601f66" (UID: "5c1941da-08f1-48e5-afcd-99d376601f66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.069766 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.070024 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.070118 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1941da-08f1-48e5-afcd-99d376601f66-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.090912 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-6mkbr" event={"ID":"c7949eee-0a06-4fe5-9cfc-b08bfdecc24a","Type":"ContainerDied","Data":"1708162fbffe0fa30b28bcec7d3a8956352d1a1030b13bf1d0d98d27329dfbb7"} Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.090981 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1708162fbffe0fa30b28bcec7d3a8956352d1a1030b13bf1d0d98d27329dfbb7" Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.091103 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-6mkbr" Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.095732 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c1941da-08f1-48e5-afcd-99d376601f66","Type":"ContainerDied","Data":"5277b4e82603d45e00beaf93c57d203aceb833eae7fdce252b79fa607e583bb3"} Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.095802 4740 scope.go:117] "RemoveContainer" containerID="3158d9cb2aa79d74dbdd6d888214270f15b9c00e202ce2738678f8a6631f5bf4" Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.096047 4740 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.096047 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.174237 4740 scope.go:117] "RemoveContainer" containerID="2940ccc79857644c685aa97e8a272ba0ffb34636a71a14fac943893790cdfc01"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.186636 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-kfpzr"]
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.207600 4740 scope.go:117] "RemoveContainer" containerID="d1cb4b028f70f624ddecec8ad53cec657e59d675e156b3f5dc2eb2f73416dede"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.212415 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-kfpzr"]
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.235428 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.267405 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.268842 4740 scope.go:117] "RemoveContainer" containerID="7a851983110fc9fcf2276523f647775a334c3e02b1d99c4602e5eb175b64f243"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.334153 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:24:37 crc kubenswrapper[4740]: E0130 16:24:37.334838 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="ceilometer-notification-agent"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.334857 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="ceilometer-notification-agent"
Jan 30 16:24:37 crc kubenswrapper[4740]: E0130 16:24:37.334878 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="sg-core"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.334884 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="sg-core"
Jan 30 16:24:37 crc kubenswrapper[4740]: E0130 16:24:37.334901 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="ceilometer-central-agent"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.334909 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="ceilometer-central-agent"
Jan 30 16:24:37 crc kubenswrapper[4740]: E0130 16:24:37.334921 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7949eee-0a06-4fe5-9cfc-b08bfdecc24a" containerName="cloudkitty-db-sync"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.334929 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7949eee-0a06-4fe5-9cfc-b08bfdecc24a" containerName="cloudkitty-db-sync"
Jan 30 16:24:37 crc kubenswrapper[4740]: E0130 16:24:37.334958 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="proxy-httpd"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.334963 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="proxy-httpd"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.335238 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7949eee-0a06-4fe5-9cfc-b08bfdecc24a" containerName="cloudkitty-db-sync"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.335250 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="proxy-httpd"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.335260 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="ceilometer-central-agent"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.335280 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="sg-core"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.335299 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" containerName="ceilometer-notification-agent"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.340598 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.349698 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.349994 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.350736 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.375233 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c1941da-08f1-48e5-afcd-99d376601f66" path="/var/lib/kubelet/pods/5c1941da-08f1-48e5-afcd-99d376601f66/volumes"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.378311 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fec9300e-65b7-42ea-abac-2de63aaa9616" path="/var/lib/kubelet/pods/fec9300e-65b7-42ea-abac-2de63aaa9616/volumes"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.379346 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.379470 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-s62jp"]
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.389833 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.400001 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-s62jp"]
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.400459 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.513602 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-config-data\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.513668 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.513705 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-scripts\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.513751 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca7e8237-6940-4092-8df0-97fa0865cc46-log-httpd\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.513785 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca7e8237-6940-4092-8df0-97fa0865cc46-run-httpd\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.513827 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-config-data\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.513879 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-scripts\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.513900 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.513954 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dj6j\" (UniqueName: \"kubernetes.io/projected/ca7e8237-6940-4092-8df0-97fa0865cc46-kube-api-access-2dj6j\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.513993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gqdq\" (UniqueName: \"kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-kube-api-access-6gqdq\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.514067 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-certs\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.514087 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.514118 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-combined-ca-bundle\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617300 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-config-data\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617392 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617440 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-scripts\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617502 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca7e8237-6940-4092-8df0-97fa0865cc46-log-httpd\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617561 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca7e8237-6940-4092-8df0-97fa0865cc46-run-httpd\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-config-data\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617679 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-scripts\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617713 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617773 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dj6j\" (UniqueName: \"kubernetes.io/projected/ca7e8237-6940-4092-8df0-97fa0865cc46-kube-api-access-2dj6j\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617809 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gqdq\" (UniqueName: \"kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-kube-api-access-6gqdq\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617901 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-certs\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617929 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.617975 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-combined-ca-bundle\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.618645 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca7e8237-6940-4092-8df0-97fa0865cc46-log-httpd\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.619154 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca7e8237-6940-4092-8df0-97fa0865cc46-run-httpd\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.623681 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.623955 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-certs\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.625071 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-scripts\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.626098 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-scripts\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.627699 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.633793 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-config-data\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.640986 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.641901 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-combined-ca-bundle\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.650599 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca7e8237-6940-4092-8df0-97fa0865cc46-config-data\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.651690 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dj6j\" (UniqueName: \"kubernetes.io/projected/ca7e8237-6940-4092-8df0-97fa0865cc46-kube-api-access-2dj6j\") pod \"ceilometer-0\" (UID: \"ca7e8237-6940-4092-8df0-97fa0865cc46\") " pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.654735 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gqdq\" (UniqueName: \"kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-kube-api-access-6gqdq\") pod \"cloudkitty-storageinit-s62jp\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.672882 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.737100 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-s62jp"
Jan 30 16:24:37 crc kubenswrapper[4740]: I0130 16:24:37.899111 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="860fd88f-2b83-4fc3-8411-7d10dc1281b2" containerName="rabbitmq" containerID="cri-o://ed7fcb5e92eb6a6f5bfecb849788ba619d6e0abb4dc66d86842facb2b072d7a0" gracePeriod=604794
Jan 30 16:24:38 crc kubenswrapper[4740]: I0130 16:24:38.284989 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 16:24:38 crc kubenswrapper[4740]: I0130 16:24:38.399662 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-s62jp"]
Jan 30 16:24:39 crc kubenswrapper[4740]: I0130 16:24:39.154579 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca7e8237-6940-4092-8df0-97fa0865cc46","Type":"ContainerStarted","Data":"19a0361eff757b12c21687682e405f79a5ca5acd1595706f35f5d035f14d7f33"}
Jan 30 16:24:39 crc kubenswrapper[4740]: I0130 16:24:39.157705 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-s62jp" event={"ID":"2c4b4df0-4413-4c6d-bba1-14398f0acb36","Type":"ContainerStarted","Data":"4953016886d1c9584b7136093872873275328ea09d160cac58cdb0ec43efa2bf"}
Jan 30 16:24:39 crc kubenswrapper[4740]: I0130 16:24:39.157796 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-s62jp" event={"ID":"2c4b4df0-4413-4c6d-bba1-14398f0acb36","Type":"ContainerStarted","Data":"716a182e9008ce7c5c646173c3c925af139ffa0b6336f02ac440789c551e92ca"}
Jan 30 16:24:39 crc kubenswrapper[4740]: I0130 16:24:39.190671 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-s62jp" podStartSLOduration=2.190635605 podStartE2EDuration="2.190635605s" podCreationTimestamp="2026-01-30 16:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:39.178972295 +0000 UTC m=+1727.816034914" watchObservedRunningTime="2026-01-30 16:24:39.190635605 +0000 UTC m=+1727.827698194"
Jan 30 16:24:41 crc kubenswrapper[4740]: I0130 16:24:41.193414 4740 generic.go:334] "Generic (PLEG): container finished" podID="2c4b4df0-4413-4c6d-bba1-14398f0acb36" containerID="4953016886d1c9584b7136093872873275328ea09d160cac58cdb0ec43efa2bf" exitCode=0
pod="openstack/cloudkitty-storageinit-s62jp" event={"ID":"2c4b4df0-4413-4c6d-bba1-14398f0acb36","Type":"ContainerDied","Data":"4953016886d1c9584b7136093872873275328ea09d160cac58cdb0ec43efa2bf"} Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.176295 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-s62jp" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.228662 4740 generic.go:334] "Generic (PLEG): container finished" podID="3aae2bad-ea00-4d1f-a30f-a8891e15ad05" containerID="f5726cc466db1ba55b4c5679b88c91afdce24f273892f7a01b8a4fe90f232d59" exitCode=0 Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.228770 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3aae2bad-ea00-4d1f-a30f-a8891e15ad05","Type":"ContainerDied","Data":"f5726cc466db1ba55b4c5679b88c91afdce24f273892f7a01b8a4fe90f232d59"} Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.234486 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-s62jp" event={"ID":"2c4b4df0-4413-4c6d-bba1-14398f0acb36","Type":"ContainerDied","Data":"716a182e9008ce7c5c646173c3c925af139ffa0b6336f02ac440789c551e92ca"} Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.234554 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="716a182e9008ce7c5c646173c3c925af139ffa0b6336f02ac440789c551e92ca" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.234709 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-s62jp" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.295263 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gqdq\" (UniqueName: \"kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-kube-api-access-6gqdq\") pod \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.295917 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-config-data\") pod \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.295984 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-combined-ca-bundle\") pod \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.296121 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-certs\") pod \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.296366 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-scripts\") pod \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\" (UID: \"2c4b4df0-4413-4c6d-bba1-14398f0acb36\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.303949 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-scripts" (OuterVolumeSpecName: "scripts") pod "2c4b4df0-4413-4c6d-bba1-14398f0acb36" (UID: "2c4b4df0-4413-4c6d-bba1-14398f0acb36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.312283 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-certs" (OuterVolumeSpecName: "certs") pod "2c4b4df0-4413-4c6d-bba1-14398f0acb36" (UID: "2c4b4df0-4413-4c6d-bba1-14398f0acb36"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.314087 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-kube-api-access-6gqdq" (OuterVolumeSpecName: "kube-api-access-6gqdq") pod "2c4b4df0-4413-4c6d-bba1-14398f0acb36" (UID: "2c4b4df0-4413-4c6d-bba1-14398f0acb36"). InnerVolumeSpecName "kube-api-access-6gqdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.367375 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c4b4df0-4413-4c6d-bba1-14398f0acb36" (UID: "2c4b4df0-4413-4c6d-bba1-14398f0acb36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.383038 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-config-data" (OuterVolumeSpecName: "config-data") pod "2c4b4df0-4413-4c6d-bba1-14398f0acb36" (UID: "2c4b4df0-4413-4c6d-bba1-14398f0acb36"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.443389 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.443448 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gqdq\" (UniqueName: \"kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-kube-api-access-6gqdq\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.443467 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.443483 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4b4df0-4413-4c6d-bba1-14398f0acb36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.443496 4740 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2c4b4df0-4413-4c6d-bba1-14398f0acb36-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.451202 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.451270 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.451633 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerName="cloudkitty-api-log" containerID="cri-o://e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136" gracePeriod=30 Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.459737 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="72171841-1d34-41c7-80b8-d9ff3550e843" containerName="cloudkitty-proc" containerID="cri-o://5cdbbb74ff5bfc23057a5254b453a1465d9bcf1c6706ef5b70b40275b3f8734a" gracePeriod=30 Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.460599 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerName="cloudkitty-api" containerID="cri-o://463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63" gracePeriod=30 Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.734774 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.867401 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-config-data\") pod \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.869008 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\") pod \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.869102 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p6zz\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-kube-api-access-7p6zz\") pod \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.869185 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-tls\") pod \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.869245 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-pod-info\") pod \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.869272 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-plugins\") pod \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.869339 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-plugins-conf\") pod \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.869389 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-confd\") pod \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.869479 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-server-conf\") pod \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.869602 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-erlang-cookie-secret\") pod 
\"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.869628 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-erlang-cookie\") pod \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\" (UID: \"3aae2bad-ea00-4d1f-a30f-a8891e15ad05\") " Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.883547 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3aae2bad-ea00-4d1f-a30f-a8891e15ad05" (UID: "3aae2bad-ea00-4d1f-a30f-a8891e15ad05"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.902363 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3aae2bad-ea00-4d1f-a30f-a8891e15ad05" (UID: "3aae2bad-ea00-4d1f-a30f-a8891e15ad05"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.908205 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3aae2bad-ea00-4d1f-a30f-a8891e15ad05" (UID: "3aae2bad-ea00-4d1f-a30f-a8891e15ad05"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.910651 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3aae2bad-ea00-4d1f-a30f-a8891e15ad05" (UID: "3aae2bad-ea00-4d1f-a30f-a8891e15ad05"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.913927 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-kube-api-access-7p6zz" (OuterVolumeSpecName: "kube-api-access-7p6zz") pod "3aae2bad-ea00-4d1f-a30f-a8891e15ad05" (UID: "3aae2bad-ea00-4d1f-a30f-a8891e15ad05"). InnerVolumeSpecName "kube-api-access-7p6zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.914885 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-pod-info" (OuterVolumeSpecName: "pod-info") pod "3aae2bad-ea00-4d1f-a30f-a8891e15ad05" (UID: "3aae2bad-ea00-4d1f-a30f-a8891e15ad05"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.930494 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3aae2bad-ea00-4d1f-a30f-a8891e15ad05" (UID: "3aae2bad-ea00-4d1f-a30f-a8891e15ad05"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.980633 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p6zz\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-kube-api-access-7p6zz\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.980667 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.980677 4740 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.980689 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.980697 4740 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.980707 4740 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:43 crc kubenswrapper[4740]: I0130 16:24:43.980716 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.018133 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-server-conf" (OuterVolumeSpecName: "server-conf") pod "3aae2bad-ea00-4d1f-a30f-a8891e15ad05" (UID: "3aae2bad-ea00-4d1f-a30f-a8891e15ad05"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.034063 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-config-data" (OuterVolumeSpecName: "config-data") pod "3aae2bad-ea00-4d1f-a30f-a8891e15ad05" (UID: "3aae2bad-ea00-4d1f-a30f-a8891e15ad05"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.087508 4740 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.087551 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.160070 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3aae2bad-ea00-4d1f-a30f-a8891e15ad05" (UID: "3aae2bad-ea00-4d1f-a30f-a8891e15ad05"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.182131 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd" (OuterVolumeSpecName: "persistence") pod "3aae2bad-ea00-4d1f-a30f-a8891e15ad05" (UID: "3aae2bad-ea00-4d1f-a30f-a8891e15ad05"). InnerVolumeSpecName "pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.190343 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\") on node \"crc\" " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.190417 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3aae2bad-ea00-4d1f-a30f-a8891e15ad05-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.263165 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca7e8237-6940-4092-8df0-97fa0865cc46","Type":"ContainerStarted","Data":"4e57d726bfc46473c93ec0792e6719c6aee62450ebf6098686504ce257095b05"} Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.264813 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3aae2bad-ea00-4d1f-a30f-a8891e15ad05","Type":"ContainerDied","Data":"9ff1c1329888872d037b7035605cc6c8bfa00ec684c8aebfc964a618fd17eb69"} Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.264855 4740 scope.go:117] "RemoveContainer" containerID="f5726cc466db1ba55b4c5679b88c91afdce24f273892f7a01b8a4fe90f232d59" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.265029 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.283472 4740 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.284390 4740 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd") on node "crc" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.293154 4740 reconciler_common.go:293] "Volume detached for volume \"pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.300660 4740 generic.go:334] "Generic (PLEG): container finished" podID="860fd88f-2b83-4fc3-8411-7d10dc1281b2" containerID="ed7fcb5e92eb6a6f5bfecb849788ba619d6e0abb4dc66d86842facb2b072d7a0" exitCode=0 Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.300817 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"860fd88f-2b83-4fc3-8411-7d10dc1281b2","Type":"ContainerDied","Data":"ed7fcb5e92eb6a6f5bfecb849788ba619d6e0abb4dc66d86842facb2b072d7a0"} Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.317441 4740 generic.go:334] "Generic (PLEG): container finished" podID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerID="e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136" exitCode=143 Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.317513 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf","Type":"ContainerDied","Data":"e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136"} Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.337134 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:24:44 crc kubenswrapper[4740]: E0130 16:24:44.337423 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.489135 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.510040 4740 scope.go:117] "RemoveContainer" containerID="9c5383bc7a9fd9a7eb8cc88dd1a216cd4547e09dff876e8f0bc0bc92048a1f2c" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.517063 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.554050 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:24:44 crc kubenswrapper[4740]: E0130 16:24:44.555323 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aae2bad-ea00-4d1f-a30f-a8891e15ad05" containerName="rabbitmq" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.555550 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aae2bad-ea00-4d1f-a30f-a8891e15ad05" containerName="rabbitmq" Jan 30 16:24:44 crc kubenswrapper[4740]: E0130 16:24:44.555593 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3aae2bad-ea00-4d1f-a30f-a8891e15ad05" containerName="setup-container" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.555600 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aae2bad-ea00-4d1f-a30f-a8891e15ad05" containerName="setup-container" Jan 30 16:24:44 crc kubenswrapper[4740]: E0130 16:24:44.555617 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4b4df0-4413-4c6d-bba1-14398f0acb36" containerName="cloudkitty-storageinit" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.555626 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4b4df0-4413-4c6d-bba1-14398f0acb36" containerName="cloudkitty-storageinit" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.560211 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aae2bad-ea00-4d1f-a30f-a8891e15ad05" containerName="rabbitmq" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.560376 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4b4df0-4413-4c6d-bba1-14398f0acb36" containerName="cloudkitty-storageinit" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.563611 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.575842 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.576094 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.576447 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.576597 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.576742 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.576878 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wn5g9" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.577009 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.591975 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.712890 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.713157 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.713233 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.713871 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83485d04-0a7f-45d0-9a43-66412e5e577e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.714061 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83485d04-0a7f-45d0-9a43-66412e5e577e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.714102 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83485d04-0a7f-45d0-9a43-66412e5e577e-config-data\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.714370 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.714539 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83485d04-0a7f-45d0-9a43-66412e5e577e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.714621 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83485d04-0a7f-45d0-9a43-66412e5e577e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.714664 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6kg\" (UniqueName: \"kubernetes.io/projected/83485d04-0a7f-45d0-9a43-66412e5e577e-kube-api-access-th6kg\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.714741 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.809108 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.819717 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.821058 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83485d04-0a7f-45d0-9a43-66412e5e577e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.821108 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83485d04-0a7f-45d0-9a43-66412e5e577e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.821129 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83485d04-0a7f-45d0-9a43-66412e5e577e-config-data\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.821171 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.821218 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83485d04-0a7f-45d0-9a43-66412e5e577e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.821246 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83485d04-0a7f-45d0-9a43-66412e5e577e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.821270 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th6kg\" (UniqueName: \"kubernetes.io/projected/83485d04-0a7f-45d0-9a43-66412e5e577e-kube-api-access-th6kg\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.821307 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.821368 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.821410 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.821872 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.827062 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83485d04-0a7f-45d0-9a43-66412e5e577e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.828288 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83485d04-0a7f-45d0-9a43-66412e5e577e-config-data\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.830507 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83485d04-0a7f-45d0-9a43-66412e5e577e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.830837 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.835228 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.838323 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83485d04-0a7f-45d0-9a43-66412e5e577e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.843804 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83485d04-0a7f-45d0-9a43-66412e5e577e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.857725 4740 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83485d04-0a7f-45d0-9a43-66412e5e577e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.860689 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.860726 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e3333af33d12b2330cb429592689dfb6f04fa8dbabb80e6e509a7e63f9ce6eca/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.876042 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th6kg\" (UniqueName: \"kubernetes.io/projected/83485d04-0a7f-45d0-9a43-66412e5e577e-kube-api-access-th6kg\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.926901 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\") pod \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.926977 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-confd\") pod \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.927090 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/860fd88f-2b83-4fc3-8411-7d10dc1281b2-erlang-cookie-secret\") pod \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.927119 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-plugins\") pod \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.927151 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/860fd88f-2b83-4fc3-8411-7d10dc1281b2-pod-info\") pod \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.927229 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-config-data\") pod \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\" (UID: 
\"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.927281 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-server-conf\") pod \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.927447 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxb2h\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-kube-api-access-qxb2h\") pod \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.927480 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-erlang-cookie\") pod \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.927512 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-plugins-conf\") pod \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.927531 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-tls\") pod \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\" (UID: \"860fd88f-2b83-4fc3-8411-7d10dc1281b2\") " Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.940845 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "860fd88f-2b83-4fc3-8411-7d10dc1281b2" (UID: "860fd88f-2b83-4fc3-8411-7d10dc1281b2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.941202 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "860fd88f-2b83-4fc3-8411-7d10dc1281b2" (UID: "860fd88f-2b83-4fc3-8411-7d10dc1281b2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.941278 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "860fd88f-2b83-4fc3-8411-7d10dc1281b2" (UID: "860fd88f-2b83-4fc3-8411-7d10dc1281b2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.941828 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "860fd88f-2b83-4fc3-8411-7d10dc1281b2" (UID: "860fd88f-2b83-4fc3-8411-7d10dc1281b2"). 
InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:44 crc kubenswrapper[4740]: I0130 16:24:44.959018 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/860fd88f-2b83-4fc3-8411-7d10dc1281b2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "860fd88f-2b83-4fc3-8411-7d10dc1281b2" (UID: "860fd88f-2b83-4fc3-8411-7d10dc1281b2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.036735 4740 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.036774 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.036784 4740 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/860fd88f-2b83-4fc3-8411-7d10dc1281b2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.036797 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.036805 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.085073 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-kube-api-access-qxb2h" (OuterVolumeSpecName: "kube-api-access-qxb2h") pod "860fd88f-2b83-4fc3-8411-7d10dc1281b2" (UID: "860fd88f-2b83-4fc3-8411-7d10dc1281b2"). InnerVolumeSpecName "kube-api-access-qxb2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.086914 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/860fd88f-2b83-4fc3-8411-7d10dc1281b2-pod-info" (OuterVolumeSpecName: "pod-info") pod "860fd88f-2b83-4fc3-8411-7d10dc1281b2" (UID: "860fd88f-2b83-4fc3-8411-7d10dc1281b2"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.139701 4740 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/860fd88f-2b83-4fc3-8411-7d10dc1281b2-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.139973 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxb2h\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-kube-api-access-qxb2h\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.153825 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-config-data" (OuterVolumeSpecName: "config-data") pod "860fd88f-2b83-4fc3-8411-7d10dc1281b2" (UID: "860fd88f-2b83-4fc3-8411-7d10dc1281b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.221689 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-97rv9"] Jan 30 16:24:45 crc kubenswrapper[4740]: E0130 16:24:45.222379 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860fd88f-2b83-4fc3-8411-7d10dc1281b2" containerName="setup-container" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.222394 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="860fd88f-2b83-4fc3-8411-7d10dc1281b2" containerName="setup-container" Jan 30 16:24:45 crc kubenswrapper[4740]: E0130 16:24:45.222425 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860fd88f-2b83-4fc3-8411-7d10dc1281b2" containerName="rabbitmq" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.222431 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="860fd88f-2b83-4fc3-8411-7d10dc1281b2" containerName="rabbitmq" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.222655 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="860fd88f-2b83-4fc3-8411-7d10dc1281b2" containerName="rabbitmq" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.224387 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.246908 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.266282 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.374136 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.374277 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.374454 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-config\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.374503 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.374558 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.374588 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qfqx\" (UniqueName: \"kubernetes.io/projected/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-kube-api-access-9qfqx\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.374651 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.375325 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-server-conf" (OuterVolumeSpecName: "server-conf") pod "860fd88f-2b83-4fc3-8411-7d10dc1281b2" (UID: "860fd88f-2b83-4fc3-8411-7d10dc1281b2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.388033 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aae2bad-ea00-4d1f-a30f-a8891e15ad05" path="/var/lib/kubelet/pods/3aae2bad-ea00-4d1f-a30f-a8891e15ad05/volumes" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.396183 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-97rv9"] Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.416113 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca7e8237-6940-4092-8df0-97fa0865cc46","Type":"ContainerStarted","Data":"3e514d999ab2a5ce72c6d34caa12118442a8cf02164fc2593119016eb9a1ca94"} Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.459875 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "860fd88f-2b83-4fc3-8411-7d10dc1281b2" (UID: "860fd88f-2b83-4fc3-8411-7d10dc1281b2"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.467254 4740 generic.go:334] "Generic (PLEG): container finished" podID="72171841-1d34-41c7-80b8-d9ff3550e843" containerID="5cdbbb74ff5bfc23057a5254b453a1465d9bcf1c6706ef5b70b40275b3f8734a" exitCode=0 Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.467398 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"72171841-1d34-41c7-80b8-d9ff3550e843","Type":"ContainerDied","Data":"5cdbbb74ff5bfc23057a5254b453a1465d9bcf1c6706ef5b70b40275b3f8734a"} Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.473157 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"860fd88f-2b83-4fc3-8411-7d10dc1281b2","Type":"ContainerDied","Data":"59c0290f1bf07d518868e326b92d32736b009f1ca4821cdbd051736aef7f3d51"} Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.473236 4740 scope.go:117] "RemoveContainer" containerID="ed7fcb5e92eb6a6f5bfecb849788ba619d6e0abb4dc66d86842facb2b072d7a0" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.473446 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.482092 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.485058 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.487683 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.488293 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.490495 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-config\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.490658 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.490933 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.491069 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qfqx\" (UniqueName: \"kubernetes.io/projected/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-kube-api-access-9qfqx\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.491255 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 
16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.491658 4740 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/860fd88f-2b83-4fc3-8411-7d10dc1281b2-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.491791 4740 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/860fd88f-2b83-4fc3-8411-7d10dc1281b2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.492971 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-config\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.494918 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.495009 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.496245 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.527158 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qfqx\" (UniqueName: \"kubernetes.io/projected/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-kube-api-access-9qfqx\") pod \"dnsmasq-dns-dc7c944bf-97rv9\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.529803 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d" (OuterVolumeSpecName: "persistence") pod "860fd88f-2b83-4fc3-8411-7d10dc1281b2" (UID: "860fd88f-2b83-4fc3-8411-7d10dc1281b2"). InnerVolumeSpecName "pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.533214 4740 scope.go:117] "RemoveContainer" containerID="18830ee869670f7ca9913ca98acb195b4ffa37625511af68394e4945e405be8b" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.559854 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-11b0e7af-f0ac-4a5c-8797-bda6d4cf8ccd\") pod \"rabbitmq-server-0\" (UID: \"83485d04-0a7f-45d0-9a43-66412e5e577e\") " pod="openstack/rabbitmq-server-0" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.603123 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\") on node \"crc\" " Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.641504 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.700096 4740 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.703048 4740 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d") on node "crc" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.713417 4740 reconciler_common.go:293] "Volume detached for volume \"pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.800296 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.838717 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.884510 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.913959 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.918753 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-scripts\") pod \"72171841-1d34-41c7-80b8-d9ff3550e843\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.919400 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w4pq\" (UniqueName: \"kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-kube-api-access-4w4pq\") pod \"72171841-1d34-41c7-80b8-d9ff3550e843\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.919516 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data\") pod \"72171841-1d34-41c7-80b8-d9ff3550e843\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.919554 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data-custom\") pod \"72171841-1d34-41c7-80b8-d9ff3550e843\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.919663 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-certs\") pod \"72171841-1d34-41c7-80b8-d9ff3550e843\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.919743 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-combined-ca-bundle\") pod \"72171841-1d34-41c7-80b8-d9ff3550e843\" (UID: \"72171841-1d34-41c7-80b8-d9ff3550e843\") " Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.930142 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-scripts" (OuterVolumeSpecName: "scripts") pod "72171841-1d34-41c7-80b8-d9ff3550e843" (UID: "72171841-1d34-41c7-80b8-d9ff3550e843"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.964122 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "72171841-1d34-41c7-80b8-d9ff3550e843" (UID: "72171841-1d34-41c7-80b8-d9ff3550e843"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.968855 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-kube-api-access-4w4pq" (OuterVolumeSpecName: "kube-api-access-4w4pq") pod "72171841-1d34-41c7-80b8-d9ff3550e843" (UID: "72171841-1d34-41c7-80b8-d9ff3550e843"). InnerVolumeSpecName "kube-api-access-4w4pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.974686 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:24:45 crc kubenswrapper[4740]: E0130 16:24:45.975403 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72171841-1d34-41c7-80b8-d9ff3550e843" containerName="cloudkitty-proc" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.975420 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="72171841-1d34-41c7-80b8-d9ff3550e843" containerName="cloudkitty-proc" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.975674 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="72171841-1d34-41c7-80b8-d9ff3550e843" containerName="cloudkitty-proc" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.980688 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.984790 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.985221 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-mfttj" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.985462 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.985757 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.986641 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.986787 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.986896 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.990544 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-certs" (OuterVolumeSpecName: "certs") pod "72171841-1d34-41c7-80b8-d9ff3550e843" (UID: "72171841-1d34-41c7-80b8-d9ff3550e843"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:45 crc kubenswrapper[4740]: I0130 16:24:45.991281 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.038478 4740 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.038623 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.038638 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w4pq\" (UniqueName: \"kubernetes.io/projected/72171841-1d34-41c7-80b8-d9ff3550e843-kube-api-access-4w4pq\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.038671 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.099373 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72171841-1d34-41c7-80b8-d9ff3550e843" (UID: "72171841-1d34-41c7-80b8-d9ff3550e843"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.145850 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t54fv\" (UniqueName: \"kubernetes.io/projected/1e1f0777-9068-4928-a4e8-971dfcbf905c-kube-api-access-t54fv\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.146012 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e1f0777-9068-4928-a4e8-971dfcbf905c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.146059 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e1f0777-9068-4928-a4e8-971dfcbf905c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.146249 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.146450 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.146855 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.146925 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.146959 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.147008 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e1f0777-9068-4928-a4e8-971dfcbf905c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.147090 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e1f0777-9068-4928-a4e8-971dfcbf905c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.147195 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e1f0777-9068-4928-a4e8-971dfcbf905c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.152138 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.206719 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data" (OuterVolumeSpecName: "config-data") pod "72171841-1d34-41c7-80b8-d9ff3550e843" (UID: "72171841-1d34-41c7-80b8-d9ff3550e843"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.255399 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t54fv\" (UniqueName: \"kubernetes.io/projected/1e1f0777-9068-4928-a4e8-971dfcbf905c-kube-api-access-t54fv\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.255467 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e1f0777-9068-4928-a4e8-971dfcbf905c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.255503 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e1f0777-9068-4928-a4e8-971dfcbf905c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.255602 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.255670 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.255764 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.255813 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.255837 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.255881 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e1f0777-9068-4928-a4e8-971dfcbf905c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc 
kubenswrapper[4740]: I0130 16:24:46.255975 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e1f0777-9068-4928-a4e8-971dfcbf905c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.256066 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e1f0777-9068-4928-a4e8-971dfcbf905c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.256165 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72171841-1d34-41c7-80b8-d9ff3550e843-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.257468 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e1f0777-9068-4928-a4e8-971dfcbf905c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.258621 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.258690 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e1f0777-9068-4928-a4e8-971dfcbf905c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.258887 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.264788 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e1f0777-9068-4928-a4e8-971dfcbf905c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.265194 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.267211 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e1f0777-9068-4928-a4e8-971dfcbf905c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.267250 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e1f0777-9068-4928-a4e8-971dfcbf905c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.271423 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e1f0777-9068-4928-a4e8-971dfcbf905c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.273333 4740 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.273424 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da441bad2f94d37b9f999b87d703bd642232cb69fb10e554c18e948912ce445b/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.282838 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t54fv\" (UniqueName: \"kubernetes.io/projected/1e1f0777-9068-4928-a4e8-971dfcbf905c-kube-api-access-t54fv\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.302930 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-97rv9"] Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.360614 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5688a0ed-a0f4-45fb-81f9-4cb1096a187d\") pod \"rabbitmq-cell1-server-0\" (UID: \"1e1f0777-9068-4928-a4e8-971dfcbf905c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.372854 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.529081 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" event={"ID":"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d","Type":"ContainerStarted","Data":"30d1e01067a0b2b54c95e7521a48210f48da338c33d8d728381e4eaec9f67fef"} Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.539081 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca7e8237-6940-4092-8df0-97fa0865cc46","Type":"ContainerStarted","Data":"1790adf25a8a7d2c5723adae81c1ab788070deb6956c3e2f5a09b52d71ee9f86"} Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.549904 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.549563 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"72171841-1d34-41c7-80b8-d9ff3550e843","Type":"ContainerDied","Data":"d0af63d377ab3fa8cd979230d867bfd66ae5560ed34148ef03e38bbb9948e09c"} Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.551778 4740 scope.go:117] "RemoveContainer" containerID="5cdbbb74ff5bfc23057a5254b453a1465d9bcf1c6706ef5b70b40275b3f8734a" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.647829 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.197:8889/healthcheck\": read tcp 10.217.0.2:35594->10.217.0.197:8889: read: connection reset by peer" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.656380 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.708577 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.756192 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.793523 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.795812 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.799058 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.808895 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.975713 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-config-data\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.975870 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdgxj\" (UniqueName: \"kubernetes.io/projected/1f12295c-9646-4ff9-854d-542e75e78e5a-kube-api-access-fdgxj\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.975936 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.975968 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-scripts\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " 
pod="openstack/cloudkitty-proc-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.976013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:46 crc kubenswrapper[4740]: I0130 16:24:46.976030 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1f12295c-9646-4ff9-854d-542e75e78e5a-certs\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.034675 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.088781 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.094288 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-scripts\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.094544 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.094610 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1f12295c-9646-4ff9-854d-542e75e78e5a-certs\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.094860 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-config-data\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.095068 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdgxj\" (UniqueName: \"kubernetes.io/projected/1f12295c-9646-4ff9-854d-542e75e78e5a-kube-api-access-fdgxj\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.102395 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 
16:24:47.103606 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/1f12295c-9646-4ff9-854d-542e75e78e5a-certs\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.116440 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.132091 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-scripts\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.133185 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f12295c-9646-4ff9-854d-542e75e78e5a-config-data\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.147253 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdgxj\" (UniqueName: \"kubernetes.io/projected/1f12295c-9646-4ff9-854d-542e75e78e5a-kube-api-access-fdgxj\") pod \"cloudkitty-proc-0\" (UID: \"1f12295c-9646-4ff9-854d-542e75e78e5a\") " pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.258614 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.305877 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.371997 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72171841-1d34-41c7-80b8-d9ff3550e843" path="/var/lib/kubelet/pods/72171841-1d34-41c7-80b8-d9ff3550e843/volumes" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.373017 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="860fd88f-2b83-4fc3-8411-7d10dc1281b2" path="/var/lib/kubelet/pods/860fd88f-2b83-4fc3-8411-7d10dc1281b2/volumes" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.416626 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-public-tls-certs\") pod \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.416704 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l8j9\" (UniqueName: \"kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-kube-api-access-5l8j9\") pod \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.416947 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-logs\") pod \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.416978 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-scripts\") pod \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.417021 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data\") pod \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.417060 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data-custom\") pod \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.417131 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-certs\") pod \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.417173 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-combined-ca-bundle\") pod \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.417330 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-internal-tls-certs\") pod \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.417493 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-logs" (OuterVolumeSpecName: "logs") pod "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" (UID: "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.418492 4740 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-logs\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.423989 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" (UID: "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.424108 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-kube-api-access-5l8j9" (OuterVolumeSpecName: "kube-api-access-5l8j9") pod "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" (UID: "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf"). InnerVolumeSpecName "kube-api-access-5l8j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.425289 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-scripts" (OuterVolumeSpecName: "scripts") pod "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" (UID: "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.435965 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-certs" (OuterVolumeSpecName: "certs") pod "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" (UID: "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.458823 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data" (OuterVolumeSpecName: "config-data") pod "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" (UID: "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.520517 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" (UID: "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.522215 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-internal-tls-certs\") pod \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\" (UID: \"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf\") " Jan 30 16:24:47 crc kubenswrapper[4740]: W0130 16:24:47.522498 4740 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf/volumes/kubernetes.io~secret/internal-tls-certs Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.523114 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" (UID: "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.525670 4740 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.525718 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.525733 4740 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.525750 4740 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.525763 4740 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.525776 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l8j9\" (UniqueName: \"kubernetes.io/projected/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-kube-api-access-5l8j9\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.529662 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" (UID: "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.594246 4740 generic.go:334] "Generic (PLEG): container finished" podID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerID="463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63" exitCode=0 Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.594370 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf","Type":"ContainerDied","Data":"463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63"} Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.594423 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"a32053fc-f2e3-40c5-8702-fbd19ecd9bbf","Type":"ContainerDied","Data":"d41a2c727b04522a2b06a8e8b1e40bd4505f18a37c610317435484894ca33709"} Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.594449 4740 scope.go:117] "RemoveContainer" containerID="463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.594642 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.595474 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" (UID: "a32053fc-f2e3-40c5-8702-fbd19ecd9bbf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.604898 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e1f0777-9068-4928-a4e8-971dfcbf905c","Type":"ContainerStarted","Data":"8729dcff92d36553e842bc79ec84319c4d6aaf13c6452df581c6eecc2270d07d"} Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.606939 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"83485d04-0a7f-45d0-9a43-66412e5e577e","Type":"ContainerStarted","Data":"13433616f53c07c6a166a87b887b362fefac80be8ca457b5761b9bcfeb8ae593"} Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.615087 4740 generic.go:334] "Generic (PLEG): container finished" podID="16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" containerID="5abf083ba68b7b68604af73cee652d873756f388e609bc8f92586a65b46b6ac1" exitCode=0 Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.615139 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" event={"ID":"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d","Type":"ContainerDied","Data":"5abf083ba68b7b68604af73cee652d873756f388e609bc8f92586a65b46b6ac1"} Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.639294 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.639335 4740 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.658613 4740 scope.go:117] "RemoveContainer" 
containerID="e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.714238 4740 scope.go:117] "RemoveContainer" containerID="463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63" Jan 30 16:24:47 crc kubenswrapper[4740]: E0130 16:24:47.714776 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63\": container with ID starting with 463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63 not found: ID does not exist" containerID="463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.714829 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63"} err="failed to get container status \"463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63\": rpc error: code = NotFound desc = could not find container \"463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63\": container with ID starting with 463a5c4f3a80c8a8eef5791772982ba9e988e2e2ffa73af79fcc0f4659108e63 not found: ID does not exist" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.714866 4740 scope.go:117] "RemoveContainer" containerID="e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136" Jan 30 16:24:47 crc kubenswrapper[4740]: E0130 16:24:47.718774 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136\": container with ID starting with e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136 not found: ID does not exist" containerID="e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.718824 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136"} err="failed to get container status \"e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136\": rpc error: code = NotFound desc = could not find container \"e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136\": container with ID starting with e2d0299c67897bb27c2545c3944fcd9bff0f1af80afa48c76dc9e6c1c5ec6136 not found: ID does not exist" Jan 30 16:24:47 crc kubenswrapper[4740]: I0130 16:24:47.944852 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Jan 30 16:24:48 crc kubenswrapper[4740]: W0130 16:24:48.081200 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f12295c_9646_4ff9_854d_542e75e78e5a.slice/crio-eaeb04b8ce79e8df0eec6399c2f5525c162b37a5c3d4a1978715471995660c6c WatchSource:0}: Error finding container eaeb04b8ce79e8df0eec6399c2f5525c162b37a5c3d4a1978715471995660c6c: Status 404 returned error can't find the container with id eaeb04b8ce79e8df0eec6399c2f5525c162b37a5c3d4a1978715471995660c6c Jan 30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.629647 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"1f12295c-9646-4ff9-854d-542e75e78e5a","Type":"ContainerStarted","Data":"eaeb04b8ce79e8df0eec6399c2f5525c162b37a5c3d4a1978715471995660c6c"} Jan 
30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.932562 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.956569 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.970009 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:24:48 crc kubenswrapper[4740]: E0130 16:24:48.970651 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerName="cloudkitty-api-log" Jan 30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.970674 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerName="cloudkitty-api-log" Jan 30 16:24:48 crc kubenswrapper[4740]: E0130 16:24:48.970683 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerName="cloudkitty-api" Jan 30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.970693 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerName="cloudkitty-api" Jan 30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.970955 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerName="cloudkitty-api" Jan 30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.970987 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" containerName="cloudkitty-api-log" Jan 30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.972434 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.976006 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Jan 30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.977053 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Jan 30 16:24:48 crc kubenswrapper[4740]: I0130 16:24:48.977213 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.012305 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.074321 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.074503 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-config-data\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.074697 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ae2907-297d-49dc-99ed-eda202004650-logs\") pod \"cloudkitty-api-0\" (UID: 
\"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.074724 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.074894 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-scripts\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.074934 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.074965 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7lg5\" (UniqueName: \"kubernetes.io/projected/b1ae2907-297d-49dc-99ed-eda202004650-kube-api-access-x7lg5\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.075022 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b1ae2907-297d-49dc-99ed-eda202004650-certs\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.075110 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.184045 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-scripts\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.184115 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.184140 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7lg5\" (UniqueName: \"kubernetes.io/projected/b1ae2907-297d-49dc-99ed-eda202004650-kube-api-access-x7lg5\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.184175 4740 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b1ae2907-297d-49dc-99ed-eda202004650-certs\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.184220 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.184267 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.184310 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-config-data\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.184382 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ae2907-297d-49dc-99ed-eda202004650-logs\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.184399 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.193657 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.197935 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ae2907-297d-49dc-99ed-eda202004650-logs\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.200446 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.204558 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-config-data\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.206076 
4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-scripts\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.218804 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.219418 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b1ae2907-297d-49dc-99ed-eda202004650-certs\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.222951 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1ae2907-297d-49dc-99ed-eda202004650-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.228842 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7lg5\" (UniqueName: \"kubernetes.io/projected/b1ae2907-297d-49dc-99ed-eda202004650-kube-api-access-x7lg5\") pod \"cloudkitty-api-0\" (UID: \"b1ae2907-297d-49dc-99ed-eda202004650\") " pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.309617 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.352164 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a32053fc-f2e3-40c5-8702-fbd19ecd9bbf" path="/var/lib/kubelet/pods/a32053fc-f2e3-40c5-8702-fbd19ecd9bbf/volumes" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.643138 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e1f0777-9068-4928-a4e8-971dfcbf905c","Type":"ContainerStarted","Data":"0f80ab613883905c3617266492f76388217941b68d0dc61a6e8fe503b18bbf03"} Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.648519 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"83485d04-0a7f-45d0-9a43-66412e5e577e","Type":"ContainerStarted","Data":"0aecb375f291e3f079a609a62a9032b57665546c21b2fb75d530e420b13c129f"} Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.652813 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" event={"ID":"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d","Type":"ContainerStarted","Data":"bdecc1e44c2def7996c7f76b79271c9cc692d0bd8c7e9d2b26eb8c9ff57fb07a"} Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.653801 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.656595 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"1f12295c-9646-4ff9-854d-542e75e78e5a","Type":"ContainerStarted","Data":"b1ca51b045c99ac577cefd26e68f10f0ecdb6659d2660a9836c4e191de7bd6c1"} Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.666012 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca7e8237-6940-4092-8df0-97fa0865cc46","Type":"ContainerStarted","Data":"43b9f2e0ca5b5433eecadcc64e01e33189dada59569b30ee605fbc939f82447a"} Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.666566 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.716833 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" podStartSLOduration=4.7167995000000005 podStartE2EDuration="4.7167995s" podCreationTimestamp="2026-01-30 16:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:49.69547276 +0000 UTC m=+1738.332535359" watchObservedRunningTime="2026-01-30 16:24:49.7167995 +0000 UTC m=+1738.353862109" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.735229 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.120633406 podStartE2EDuration="12.735196767s" podCreationTimestamp="2026-01-30 16:24:37 +0000 UTC" firstStartedPulling="2026-01-30 16:24:38.299412749 +0000 UTC m=+1726.936475348" lastFinishedPulling="2026-01-30 16:24:48.9139761 +0000 UTC m=+1737.551038709" observedRunningTime="2026-01-30 16:24:49.726165613 +0000 UTC m=+1738.363228212" watchObservedRunningTime="2026-01-30 16:24:49.735196767 +0000 UTC m=+1738.372259376" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.757110 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" 
podStartSLOduration=3.555652869 podStartE2EDuration="3.757086151s" podCreationTimestamp="2026-01-30 16:24:46 +0000 UTC" firstStartedPulling="2026-01-30 16:24:48.083893895 +0000 UTC m=+1736.720956534" lastFinishedPulling="2026-01-30 16:24:48.285327217 +0000 UTC m=+1736.922389816" observedRunningTime="2026-01-30 16:24:49.754327052 +0000 UTC m=+1738.391389651" watchObservedRunningTime="2026-01-30 16:24:49.757086151 +0000 UTC m=+1738.394148750" Jan 30 16:24:49 crc kubenswrapper[4740]: I0130 16:24:49.835293 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Jan 30 16:24:50 crc kubenswrapper[4740]: I0130 16:24:50.682569 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b1ae2907-297d-49dc-99ed-eda202004650","Type":"ContainerStarted","Data":"be69e2bb085d420da4a2fde57dd78c6088b527e5c12b15a89cc640dba86aee0a"} Jan 30 16:24:50 crc kubenswrapper[4740]: I0130 16:24:50.683043 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b1ae2907-297d-49dc-99ed-eda202004650","Type":"ContainerStarted","Data":"4404d7fef9622ae3962bcb99ae07ed4cbcee5d2ab5588e9a780562e51088bf71"} Jan 30 16:24:50 crc kubenswrapper[4740]: I0130 16:24:50.683060 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b1ae2907-297d-49dc-99ed-eda202004650","Type":"ContainerStarted","Data":"a29e66a8232c582495e8680579b745c570ee8ed7c3bd7908ea19ceb65827acba"} Jan 30 16:24:50 crc kubenswrapper[4740]: I0130 16:24:50.683417 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Jan 30 16:24:50 crc kubenswrapper[4740]: I0130 16:24:50.721165 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.721136515 podStartE2EDuration="2.721136515s" podCreationTimestamp="2026-01-30 16:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:50.707384303 +0000 UTC m=+1739.344446902" watchObservedRunningTime="2026-01-30 16:24:50.721136515 +0000 UTC m=+1739.358199114" Jan 30 16:24:55 crc kubenswrapper[4740]: I0130 16:24:55.644620 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:24:55 crc kubenswrapper[4740]: I0130 16:24:55.730747 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-sv6kp"] Jan 30 16:24:55 crc kubenswrapper[4740]: I0130 16:24:55.731145 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" podUID="82b4d3c6-f45f-4677-8369-ee9a6f1746b7" containerName="dnsmasq-dns" containerID="cri-o://9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05" gracePeriod=10 Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.028987 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-2bdvp"] Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.032695 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.058044 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-2bdvp"] Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.119253 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.119531 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.119760 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.120006 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.120084 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.120121 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j8fw\" (UniqueName: \"kubernetes.io/projected/03fa751d-d601-4f94-8cd6-3607c005211c-kube-api-access-4j8fw\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.121162 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-config\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.229402 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-config\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.229624 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.229731 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.229815 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.229951 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.230074 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.230150 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j8fw\" (UniqueName: \"kubernetes.io/projected/03fa751d-d601-4f94-8cd6-3607c005211c-kube-api-access-4j8fw\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.231897 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-config\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.232684 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.233412 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.234091 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.235100 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.236063 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03fa751d-d601-4f94-8cd6-3607c005211c-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.275128 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j8fw\" (UniqueName: \"kubernetes.io/projected/03fa751d-d601-4f94-8cd6-3607c005211c-kube-api-access-4j8fw\") pod \"dnsmasq-dns-c4b758ff5-2bdvp\" (UID: \"03fa751d-d601-4f94-8cd6-3607c005211c\") " pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.364762 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.606384 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.644037 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-swift-storage-0\") pod \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.644650 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-svc\") pod \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.644699 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-nb\") pod \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.644941 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vqz4\" (UniqueName: \"kubernetes.io/projected/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-kube-api-access-8vqz4\") pod \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.644981 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-config\") pod \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 
16:24:56.645017 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-sb\") pod \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\" (UID: \"82b4d3c6-f45f-4677-8369-ee9a6f1746b7\") " Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.723117 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-kube-api-access-8vqz4" (OuterVolumeSpecName: "kube-api-access-8vqz4") pod "82b4d3c6-f45f-4677-8369-ee9a6f1746b7" (UID: "82b4d3c6-f45f-4677-8369-ee9a6f1746b7"). InnerVolumeSpecName "kube-api-access-8vqz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.750740 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vqz4\" (UniqueName: \"kubernetes.io/projected/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-kube-api-access-8vqz4\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.870365 4740 generic.go:334] "Generic (PLEG): container finished" podID="82b4d3c6-f45f-4677-8369-ee9a6f1746b7" containerID="9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05" exitCode=0 Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.870447 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" event={"ID":"82b4d3c6-f45f-4677-8369-ee9a6f1746b7","Type":"ContainerDied","Data":"9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05"} Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.870495 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" event={"ID":"82b4d3c6-f45f-4677-8369-ee9a6f1746b7","Type":"ContainerDied","Data":"ce427d2d38a04e0d66835ab0f37e64418e8bf1ffbe6eb378064607bba7c3b937"} Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.870520 4740 scope.go:117] "RemoveContainer" containerID="9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.870828 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-sv6kp" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.921248 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82b4d3c6-f45f-4677-8369-ee9a6f1746b7" (UID: "82b4d3c6-f45f-4677-8369-ee9a6f1746b7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.990821 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82b4d3c6-f45f-4677-8369-ee9a6f1746b7" (UID: "82b4d3c6-f45f-4677-8369-ee9a6f1746b7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.996063 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:56 crc kubenswrapper[4740]: I0130 16:24:56.996112 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.016035 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82b4d3c6-f45f-4677-8369-ee9a6f1746b7" (UID: "82b4d3c6-f45f-4677-8369-ee9a6f1746b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.050117 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-config" (OuterVolumeSpecName: "config") pod "82b4d3c6-f45f-4677-8369-ee9a6f1746b7" (UID: "82b4d3c6-f45f-4677-8369-ee9a6f1746b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.068059 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82b4d3c6-f45f-4677-8369-ee9a6f1746b7" (UID: "82b4d3c6-f45f-4677-8369-ee9a6f1746b7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.098857 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.098928 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.098957 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b4d3c6-f45f-4677-8369-ee9a6f1746b7-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.144394 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-2bdvp"] Jan 30 16:24:57 crc kubenswrapper[4740]: W0130 16:24:57.190605 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03fa751d_d601_4f94_8cd6_3607c005211c.slice/crio-6a63da30854cc1d90fafaf5fb2ae47633602cf58a2f36af1ca1f4e1996dceaa5 WatchSource:0}: Error finding container 6a63da30854cc1d90fafaf5fb2ae47633602cf58a2f36af1ca1f4e1996dceaa5: Status 404 returned error can't find the container with id 6a63da30854cc1d90fafaf5fb2ae47633602cf58a2f36af1ca1f4e1996dceaa5 Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.190721 4740 scope.go:117] "RemoveContainer" containerID="740cc3e7e14c53fd48c66234036400b60b80c0290ace0b88dbca7702055f6873" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.281853 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-sv6kp"] Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.283130 4740 scope.go:117] "RemoveContainer" containerID="9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.301242 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-sv6kp"] Jan 30 16:24:57 crc kubenswrapper[4740]: E0130 16:24:57.306011 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05\": container with ID starting with 9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05 not found: ID does not exist" containerID="9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.306102 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05"} err="failed to get container status \"9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05\": rpc error: code = NotFound desc = could not find container \"9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05\": container with ID starting with 9cb699bd7cfcba96fa5512389c56eca2eaaedb85e1496a67fa046779d8b4bf05 not found: ID does not exist" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.306151 4740 scope.go:117] "RemoveContainer" containerID="740cc3e7e14c53fd48c66234036400b60b80c0290ace0b88dbca7702055f6873" Jan 30 16:24:57 crc kubenswrapper[4740]: E0130 16:24:57.313196 4740 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740cc3e7e14c53fd48c66234036400b60b80c0290ace0b88dbca7702055f6873\": container with ID starting with 740cc3e7e14c53fd48c66234036400b60b80c0290ace0b88dbca7702055f6873 not found: ID does not exist" containerID="740cc3e7e14c53fd48c66234036400b60b80c0290ace0b88dbca7702055f6873" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.313266 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740cc3e7e14c53fd48c66234036400b60b80c0290ace0b88dbca7702055f6873"} err="failed to get container status \"740cc3e7e14c53fd48c66234036400b60b80c0290ace0b88dbca7702055f6873\": rpc error: code = NotFound desc = could not find container \"740cc3e7e14c53fd48c66234036400b60b80c0290ace0b88dbca7702055f6873\": container with ID starting with 740cc3e7e14c53fd48c66234036400b60b80c0290ace0b88dbca7702055f6873 not found: ID does not exist" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.354043 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b4d3c6-f45f-4677-8369-ee9a6f1746b7" path="/var/lib/kubelet/pods/82b4d3c6-f45f-4677-8369-ee9a6f1746b7/volumes" Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.884202 4740 generic.go:334] "Generic (PLEG): container finished" podID="03fa751d-d601-4f94-8cd6-3607c005211c" containerID="65a49b48882ef0bc7a62613fdd21aeab98eb6a76641fbac025149777822116a6" exitCode=0 Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.884304 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" event={"ID":"03fa751d-d601-4f94-8cd6-3607c005211c","Type":"ContainerDied","Data":"65a49b48882ef0bc7a62613fdd21aeab98eb6a76641fbac025149777822116a6"} Jan 30 16:24:57 crc kubenswrapper[4740]: I0130 16:24:57.884362 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" event={"ID":"03fa751d-d601-4f94-8cd6-3607c005211c","Type":"ContainerStarted","Data":"6a63da30854cc1d90fafaf5fb2ae47633602cf58a2f36af1ca1f4e1996dceaa5"} Jan 30 16:24:58 crc kubenswrapper[4740]: I0130 16:24:58.913073 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" event={"ID":"03fa751d-d601-4f94-8cd6-3607c005211c","Type":"ContainerStarted","Data":"5535b17f98aa5eca2bebb07d45047ff022b4e9f63c75da8e7dfb5a4298c22633"} Jan 30 16:24:58 crc kubenswrapper[4740]: I0130 16:24:58.915777 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:24:59 crc kubenswrapper[4740]: I0130 16:24:59.345664 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:24:59 crc kubenswrapper[4740]: E0130 16:24:59.346538 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:25:06 crc kubenswrapper[4740]: I0130 16:25:06.366981 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" Jan 30 16:25:06 crc kubenswrapper[4740]: I0130 16:25:06.398180 4740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c4b758ff5-2bdvp" podStartSLOduration=11.398143707 podStartE2EDuration="11.398143707s" podCreationTimestamp="2026-01-30 16:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:24:58.941708514 +0000 UTC m=+1747.578771123" watchObservedRunningTime="2026-01-30 16:25:06.398143707 +0000 UTC m=+1755.035206306" Jan 30 16:25:06 crc kubenswrapper[4740]: I0130 16:25:06.503826 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-97rv9"] Jan 30 16:25:06 crc kubenswrapper[4740]: I0130 16:25:06.504224 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" podUID="16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" containerName="dnsmasq-dns" containerID="cri-o://bdecc1e44c2def7996c7f76b79271c9cc692d0bd8c7e9d2b26eb8c9ff57fb07a" gracePeriod=10 Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.069645 4740 generic.go:334] "Generic (PLEG): container finished" podID="16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" containerID="bdecc1e44c2def7996c7f76b79271c9cc692d0bd8c7e9d2b26eb8c9ff57fb07a" exitCode=0 Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.070048 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" event={"ID":"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d","Type":"ContainerDied","Data":"bdecc1e44c2def7996c7f76b79271c9cc692d0bd8c7e9d2b26eb8c9ff57fb07a"} Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.333054 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.378687 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-config\") pod \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.379034 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-swift-storage-0\") pod \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.379067 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-svc\") pod \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.379141 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-sb\") pod \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.379228 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qfqx\" (UniqueName: \"kubernetes.io/projected/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-kube-api-access-9qfqx\") pod \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " Jan 30 
16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.379283 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-nb\") pod \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.379312 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-openstack-edpm-ipam\") pod \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\" (UID: \"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d\") " Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.438707 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-kube-api-access-9qfqx" (OuterVolumeSpecName: "kube-api-access-9qfqx") pod "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" (UID: "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d"). InnerVolumeSpecName "kube-api-access-9qfqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.484388 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qfqx\" (UniqueName: \"kubernetes.io/projected/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-kube-api-access-9qfqx\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.507123 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" (UID: "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.514025 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" (UID: "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.521035 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" (UID: "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.524273 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-config" (OuterVolumeSpecName: "config") pod "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" (UID: "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.528343 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" (UID: "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.536594 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" (UID: "16245ce0-4917-4b93-b5cc-54aaf3f4fe3d"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.587878 4740 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.587930 4740 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.587941 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.587954 4740 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.587969 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.587981 4740 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:07 crc kubenswrapper[4740]: I0130 16:25:07.687818 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 16:25:08 crc kubenswrapper[4740]: I0130 16:25:08.085433 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" event={"ID":"16245ce0-4917-4b93-b5cc-54aaf3f4fe3d","Type":"ContainerDied","Data":"30d1e01067a0b2b54c95e7521a48210f48da338c33d8d728381e4eaec9f67fef"} Jan 30 16:25:08 crc kubenswrapper[4740]: I0130 16:25:08.085880 4740 scope.go:117] "RemoveContainer" containerID="bdecc1e44c2def7996c7f76b79271c9cc692d0bd8c7e9d2b26eb8c9ff57fb07a" Jan 30 16:25:08 crc kubenswrapper[4740]: I0130 16:25:08.085596 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-97rv9" Jan 30 16:25:08 crc kubenswrapper[4740]: I0130 16:25:08.127894 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-97rv9"] Jan 30 16:25:08 crc kubenswrapper[4740]: I0130 16:25:08.139200 4740 scope.go:117] "RemoveContainer" containerID="5abf083ba68b7b68604af73cee652d873756f388e609bc8f92586a65b46b6ac1" Jan 30 16:25:08 crc kubenswrapper[4740]: I0130 16:25:08.154112 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-97rv9"] Jan 30 16:25:09 crc kubenswrapper[4740]: I0130 16:25:09.356123 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" path="/var/lib/kubelet/pods/16245ce0-4917-4b93-b5cc-54aaf3f4fe3d/volumes" Jan 30 16:25:13 crc kubenswrapper[4740]: I0130 16:25:13.344240 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:25:13 crc kubenswrapper[4740]: E0130 16:25:13.345047 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.356409 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6"] Jan 30 16:25:19 crc kubenswrapper[4740]: E0130 16:25:19.359067 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" containerName="init" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.359141 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" containerName="init" Jan 30 16:25:19 crc kubenswrapper[4740]: E0130 16:25:19.359208 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b4d3c6-f45f-4677-8369-ee9a6f1746b7" containerName="dnsmasq-dns" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.359258 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b4d3c6-f45f-4677-8369-ee9a6f1746b7" containerName="dnsmasq-dns" Jan 30 16:25:19 crc kubenswrapper[4740]: E0130 16:25:19.359330 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" containerName="dnsmasq-dns" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.359410 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" containerName="dnsmasq-dns" Jan 30 16:25:19 crc kubenswrapper[4740]: E0130 16:25:19.359490 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b4d3c6-f45f-4677-8369-ee9a6f1746b7" containerName="init" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.359549 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b4d3c6-f45f-4677-8369-ee9a6f1746b7" containerName="init" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.359829 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="16245ce0-4917-4b93-b5cc-54aaf3f4fe3d" containerName="dnsmasq-dns" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.359901 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="82b4d3c6-f45f-4677-8369-ee9a6f1746b7" containerName="dnsmasq-dns" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.360969 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.367885 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6"] Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.369635 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.371629 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.371894 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.378113 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.486763 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.486846 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.486907 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9vz6\" (UniqueName: \"kubernetes.io/projected/e8918d38-5722-4b5b-9b52-5a18971aa5f1-kube-api-access-x9vz6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.487033 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.590127 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc 
kubenswrapper[4740]: I0130 16:25:19.590208 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.590278 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9vz6\" (UniqueName: \"kubernetes.io/projected/e8918d38-5722-4b5b-9b52-5a18971aa5f1-kube-api-access-x9vz6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.590411 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.598253 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.598284 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.598253 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.610834 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9vz6\" (UniqueName: \"kubernetes.io/projected/e8918d38-5722-4b5b-9b52-5a18971aa5f1-kube-api-access-x9vz6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:19 crc kubenswrapper[4740]: I0130 16:25:19.701206 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:20 crc kubenswrapper[4740]: I0130 16:25:20.347467 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6"] Jan 30 16:25:21 crc kubenswrapper[4740]: I0130 16:25:21.258700 4740 generic.go:334] "Generic (PLEG): container finished" podID="1e1f0777-9068-4928-a4e8-971dfcbf905c" containerID="0f80ab613883905c3617266492f76388217941b68d0dc61a6e8fe503b18bbf03" exitCode=0 Jan 30 16:25:21 crc kubenswrapper[4740]: I0130 16:25:21.258848 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e1f0777-9068-4928-a4e8-971dfcbf905c","Type":"ContainerDied","Data":"0f80ab613883905c3617266492f76388217941b68d0dc61a6e8fe503b18bbf03"} Jan 30 16:25:21 crc kubenswrapper[4740]: I0130 16:25:21.264714 4740 generic.go:334] "Generic (PLEG): container finished" podID="83485d04-0a7f-45d0-9a43-66412e5e577e" containerID="0aecb375f291e3f079a609a62a9032b57665546c21b2fb75d530e420b13c129f" exitCode=0 Jan 30 16:25:21 crc kubenswrapper[4740]: I0130 16:25:21.264801 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"83485d04-0a7f-45d0-9a43-66412e5e577e","Type":"ContainerDied","Data":"0aecb375f291e3f079a609a62a9032b57665546c21b2fb75d530e420b13c129f"} Jan 30 16:25:21 crc kubenswrapper[4740]: I0130 16:25:21.285311 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" event={"ID":"e8918d38-5722-4b5b-9b52-5a18971aa5f1","Type":"ContainerStarted","Data":"253afd54449dc515a939c85cdb92902add2d621e098cfbbe7bbc775565acec76"} Jan 30 16:25:23 crc kubenswrapper[4740]: I0130 16:25:23.375685 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1e1f0777-9068-4928-a4e8-971dfcbf905c","Type":"ContainerStarted","Data":"7cfa917741b9b996499b944765550285f2aa87a6f6bb55f5c2cecd2b3c343c08"} Jan 30 16:25:23 crc kubenswrapper[4740]: I0130 16:25:23.376541 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:25:23 crc kubenswrapper[4740]: I0130 16:25:23.379214 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"83485d04-0a7f-45d0-9a43-66412e5e577e","Type":"ContainerStarted","Data":"8c7d02002fc09bcbc5b63adbbd3e3eb482dc8e18442e50c38afcc3d8b5361cc4"} Jan 30 16:25:23 crc kubenswrapper[4740]: I0130 16:25:23.379562 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 16:25:23 crc kubenswrapper[4740]: I0130 16:25:23.410009 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.409975473 podStartE2EDuration="38.409975473s" podCreationTimestamp="2026-01-30 16:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:25:23.400526268 +0000 UTC m=+1772.037588867" watchObservedRunningTime="2026-01-30 16:25:23.409975473 +0000 UTC m=+1772.047038072" Jan 30 16:25:23 crc kubenswrapper[4740]: I0130 16:25:23.440562 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.440530131 podStartE2EDuration="39.440530131s" podCreationTimestamp="2026-01-30 16:24:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:25:23.426543194 +0000 UTC m=+1772.063605793" watchObservedRunningTime="2026-01-30 16:25:23.440530131 +0000 UTC m=+1772.077592730" Jan 30 16:25:26 crc kubenswrapper[4740]: I0130 16:25:26.335966 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:25:26 crc kubenswrapper[4740]: E0130 16:25:26.336721 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:25:26 crc kubenswrapper[4740]: I0130 16:25:26.401564 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Jan 30 16:25:32 crc kubenswrapper[4740]: I0130 16:25:32.312218 4740 scope.go:117] "RemoveContainer" containerID="ed761ba196c2364e32d286b0ca3a603491fac5dfda56bf0c5bc389d529ae1342" Jan 30 16:25:34 crc kubenswrapper[4740]: I0130 16:25:34.576681 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" event={"ID":"e8918d38-5722-4b5b-9b52-5a18971aa5f1","Type":"ContainerStarted","Data":"8cddc81bbe7006dc31882d40cc7db9af93ea63d0fe2a88d026493877988281eb"} Jan 30 16:25:34 crc kubenswrapper[4740]: I0130 16:25:34.611167 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" podStartSLOduration=1.9468016769999998 podStartE2EDuration="15.611138054s" podCreationTimestamp="2026-01-30 16:25:19 +0000 UTC" firstStartedPulling="2026-01-30 16:25:20.357979582 +0000 UTC m=+1768.995042181" lastFinishedPulling="2026-01-30 16:25:34.022315959 +0000 UTC m=+1782.659378558" observedRunningTime="2026-01-30 16:25:34.600687364 +0000 UTC m=+1783.237749963" watchObservedRunningTime="2026-01-30 16:25:34.611138054 +0000 UTC m=+1783.248200653" Jan 30 16:25:35 crc kubenswrapper[4740]: I0130 16:25:35.804977 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="83485d04-0a7f-45d0-9a43-66412e5e577e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.239:5671: connect: connection refused" Jan 30 16:25:36 crc kubenswrapper[4740]: I0130 16:25:36.378300 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="1e1f0777-9068-4928-a4e8-971dfcbf905c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.241:5671: connect: connection refused" Jan 30 16:25:37 crc kubenswrapper[4740]: I0130 16:25:37.337111 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:25:37 crc kubenswrapper[4740]: E0130 16:25:37.337488 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" 
podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:25:45 crc kubenswrapper[4740]: I0130 16:25:45.713332 4740 generic.go:334] "Generic (PLEG): container finished" podID="e8918d38-5722-4b5b-9b52-5a18971aa5f1" containerID="8cddc81bbe7006dc31882d40cc7db9af93ea63d0fe2a88d026493877988281eb" exitCode=0 Jan 30 16:25:45 crc kubenswrapper[4740]: I0130 16:25:45.713494 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" event={"ID":"e8918d38-5722-4b5b-9b52-5a18971aa5f1","Type":"ContainerDied","Data":"8cddc81bbe7006dc31882d40cc7db9af93ea63d0fe2a88d026493877988281eb"} Jan 30 16:25:45 crc kubenswrapper[4740]: I0130 16:25:45.803647 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 16:25:46 crc kubenswrapper[4740]: I0130 16:25:46.377704 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.323608 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.421494 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-ssh-key-openstack-edpm-ipam\") pod \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.422509 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-inventory\") pod \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.422791 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-repo-setup-combined-ca-bundle\") pod \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.422836 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9vz6\" (UniqueName: \"kubernetes.io/projected/e8918d38-5722-4b5b-9b52-5a18971aa5f1-kube-api-access-x9vz6\") pod \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\" (UID: \"e8918d38-5722-4b5b-9b52-5a18971aa5f1\") " Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.429844 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8918d38-5722-4b5b-9b52-5a18971aa5f1-kube-api-access-x9vz6" (OuterVolumeSpecName: "kube-api-access-x9vz6") pod "e8918d38-5722-4b5b-9b52-5a18971aa5f1" (UID: "e8918d38-5722-4b5b-9b52-5a18971aa5f1"). InnerVolumeSpecName "kube-api-access-x9vz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.431724 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e8918d38-5722-4b5b-9b52-5a18971aa5f1" (UID: "e8918d38-5722-4b5b-9b52-5a18971aa5f1"). 
InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.457067 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e8918d38-5722-4b5b-9b52-5a18971aa5f1" (UID: "e8918d38-5722-4b5b-9b52-5a18971aa5f1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.473563 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-inventory" (OuterVolumeSpecName: "inventory") pod "e8918d38-5722-4b5b-9b52-5a18971aa5f1" (UID: "e8918d38-5722-4b5b-9b52-5a18971aa5f1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.525590 4740 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.525637 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9vz6\" (UniqueName: \"kubernetes.io/projected/e8918d38-5722-4b5b-9b52-5a18971aa5f1-kube-api-access-x9vz6\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.525652 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.525665 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8918d38-5722-4b5b-9b52-5a18971aa5f1-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.748084 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" event={"ID":"e8918d38-5722-4b5b-9b52-5a18971aa5f1","Type":"ContainerDied","Data":"253afd54449dc515a939c85cdb92902add2d621e098cfbbe7bbc775565acec76"} Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.748164 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="253afd54449dc515a939c85cdb92902add2d621e098cfbbe7bbc775565acec76" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.748266 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.959612 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4"] Jan 30 16:25:47 crc kubenswrapper[4740]: E0130 16:25:47.960265 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8918d38-5722-4b5b-9b52-5a18971aa5f1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.960295 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8918d38-5722-4b5b-9b52-5a18971aa5f1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.960615 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8918d38-5722-4b5b-9b52-5a18971aa5f1" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.961662 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.965119 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.965385 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.965747 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.966313 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:25:47 crc kubenswrapper[4740]: I0130 16:25:47.979522 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4"] Jan 30 16:25:48 crc kubenswrapper[4740]: I0130 16:25:48.036626 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qwsf4\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:48 crc kubenswrapper[4740]: I0130 16:25:48.037312 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qwsf4\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:48 crc kubenswrapper[4740]: I0130 16:25:48.037487 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mnxx\" (UniqueName: \"kubernetes.io/projected/92f231c6-6140-49b3-89ba-65cf9472a1dd-kube-api-access-8mnxx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qwsf4\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:48 crc kubenswrapper[4740]: I0130 16:25:48.139994 4740 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qwsf4\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:48 crc kubenswrapper[4740]: I0130 16:25:48.140183 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qwsf4\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:48 crc kubenswrapper[4740]: I0130 16:25:48.140243 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mnxx\" (UniqueName: \"kubernetes.io/projected/92f231c6-6140-49b3-89ba-65cf9472a1dd-kube-api-access-8mnxx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qwsf4\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:48 crc kubenswrapper[4740]: I0130 16:25:48.146006 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qwsf4\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:48 crc kubenswrapper[4740]: I0130 16:25:48.146067 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qwsf4\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:48 crc kubenswrapper[4740]: I0130 16:25:48.173563 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mnxx\" (UniqueName: \"kubernetes.io/projected/92f231c6-6140-49b3-89ba-65cf9472a1dd-kube-api-access-8mnxx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qwsf4\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:48 crc kubenswrapper[4740]: I0130 16:25:48.284566 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:48 crc kubenswrapper[4740]: I0130 16:25:48.929724 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4"] Jan 30 16:25:49 crc kubenswrapper[4740]: I0130 16:25:49.336984 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:25:49 crc kubenswrapper[4740]: E0130 16:25:49.337594 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:25:49 crc kubenswrapper[4740]: I0130 16:25:49.777182 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" event={"ID":"92f231c6-6140-49b3-89ba-65cf9472a1dd","Type":"ContainerStarted","Data":"675f3765493e16977e3f9f1c71dbc0606e5921e9badd7bdb13d432832fe7b0d5"} Jan 30 16:25:50 crc kubenswrapper[4740]: I0130 16:25:50.822757 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" event={"ID":"92f231c6-6140-49b3-89ba-65cf9472a1dd","Type":"ContainerStarted","Data":"bb9c6797079e1de1dc98dcb5deee69e77f1341a38cace6183f9e0cdb203ff0b3"} Jan 30 16:25:50 crc kubenswrapper[4740]: I0130 16:25:50.860832 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" podStartSLOduration=3.104061375 podStartE2EDuration="3.860805519s" podCreationTimestamp="2026-01-30 16:25:47 +0000 UTC" firstStartedPulling="2026-01-30 16:25:48.950659608 +0000 UTC m=+1797.587722207" lastFinishedPulling="2026-01-30 16:25:49.707403752 +0000 UTC m=+1798.344466351" observedRunningTime="2026-01-30 16:25:50.847222862 +0000 UTC m=+1799.484285471" watchObservedRunningTime="2026-01-30 16:25:50.860805519 +0000 UTC m=+1799.497868108" Jan 30 16:25:52 crc kubenswrapper[4740]: I0130 16:25:52.870642 4740 generic.go:334] "Generic (PLEG): container finished" podID="92f231c6-6140-49b3-89ba-65cf9472a1dd" containerID="bb9c6797079e1de1dc98dcb5deee69e77f1341a38cace6183f9e0cdb203ff0b3" exitCode=0 Jan 30 16:25:52 crc kubenswrapper[4740]: I0130 16:25:52.870743 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" event={"ID":"92f231c6-6140-49b3-89ba-65cf9472a1dd","Type":"ContainerDied","Data":"bb9c6797079e1de1dc98dcb5deee69e77f1341a38cace6183f9e0cdb203ff0b3"} Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.533758 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.642088 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-ssh-key-openstack-edpm-ipam\") pod \"92f231c6-6140-49b3-89ba-65cf9472a1dd\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.642240 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mnxx\" (UniqueName: \"kubernetes.io/projected/92f231c6-6140-49b3-89ba-65cf9472a1dd-kube-api-access-8mnxx\") pod \"92f231c6-6140-49b3-89ba-65cf9472a1dd\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.642491 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-inventory\") pod \"92f231c6-6140-49b3-89ba-65cf9472a1dd\" (UID: \"92f231c6-6140-49b3-89ba-65cf9472a1dd\") " Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.650621 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f231c6-6140-49b3-89ba-65cf9472a1dd-kube-api-access-8mnxx" (OuterVolumeSpecName: "kube-api-access-8mnxx") pod "92f231c6-6140-49b3-89ba-65cf9472a1dd" (UID: "92f231c6-6140-49b3-89ba-65cf9472a1dd"). InnerVolumeSpecName "kube-api-access-8mnxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.677542 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-inventory" (OuterVolumeSpecName: "inventory") pod "92f231c6-6140-49b3-89ba-65cf9472a1dd" (UID: "92f231c6-6140-49b3-89ba-65cf9472a1dd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.678864 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92f231c6-6140-49b3-89ba-65cf9472a1dd" (UID: "92f231c6-6140-49b3-89ba-65cf9472a1dd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.755318 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.755370 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92f231c6-6140-49b3-89ba-65cf9472a1dd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.755383 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mnxx\" (UniqueName: \"kubernetes.io/projected/92f231c6-6140-49b3-89ba-65cf9472a1dd-kube-api-access-8mnxx\") on node \"crc\" DevicePath \"\"" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.909421 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" event={"ID":"92f231c6-6140-49b3-89ba-65cf9472a1dd","Type":"ContainerDied","Data":"675f3765493e16977e3f9f1c71dbc0606e5921e9badd7bdb13d432832fe7b0d5"} Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.909887 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="675f3765493e16977e3f9f1c71dbc0606e5921e9badd7bdb13d432832fe7b0d5" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.909514 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qwsf4" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.996827 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv"] Jan 30 16:25:54 crc kubenswrapper[4740]: E0130 16:25:54.997605 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f231c6-6140-49b3-89ba-65cf9472a1dd" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.997635 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f231c6-6140-49b3-89ba-65cf9472a1dd" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.998007 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f231c6-6140-49b3-89ba-65cf9472a1dd" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 16:25:54 crc kubenswrapper[4740]: I0130 16:25:54.999253 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.002318 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.002640 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.003234 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.003426 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.007887 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv"] Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.063073 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.063199 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.063296 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.063375 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jl8v\" (UniqueName: \"kubernetes.io/projected/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-kube-api-access-2jl8v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.166185 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.166390 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.166486 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.166606 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jl8v\" (UniqueName: \"kubernetes.io/projected/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-kube-api-access-2jl8v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.173724 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.176195 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.184160 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.198963 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jl8v\" (UniqueName: \"kubernetes.io/projected/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-kube-api-access-2jl8v\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:55 crc kubenswrapper[4740]: I0130 16:25:55.316972 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" Jan 30 16:25:56 crc kubenswrapper[4740]: I0130 16:25:56.099739 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv"] Jan 30 16:25:56 crc kubenswrapper[4740]: I0130 16:25:56.936071 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" event={"ID":"1d25020c-4758-47af-a6c4-5c6cd3c1b74b","Type":"ContainerStarted","Data":"f292b23e7e3eba926ca49dd3f35204f8b910a8fd085978b593c7f3342df59278"} Jan 30 16:25:56 crc kubenswrapper[4740]: I0130 16:25:56.937014 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" event={"ID":"1d25020c-4758-47af-a6c4-5c6cd3c1b74b","Type":"ContainerStarted","Data":"094ae4165e84afeb565cd29c6fd1f12f68cf560a4f8fff5a979a81db6bbe0c00"} Jan 30 16:25:56 crc kubenswrapper[4740]: I0130 16:25:56.968876 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" podStartSLOduration=2.5333348559999997 podStartE2EDuration="2.968848842s" podCreationTimestamp="2026-01-30 16:25:54 +0000 UTC" firstStartedPulling="2026-01-30 16:25:56.09948991 +0000 UTC m=+1804.736552519" lastFinishedPulling="2026-01-30 16:25:56.535003896 +0000 UTC m=+1805.172066505" observedRunningTime="2026-01-30 16:25:56.95829675 +0000 UTC m=+1805.595359349" watchObservedRunningTime="2026-01-30 16:25:56.968848842 +0000 UTC m=+1805.605911441" Jan 30 16:26:02 crc kubenswrapper[4740]: I0130 16:26:02.336006 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:26:02 crc kubenswrapper[4740]: E0130 16:26:02.336942 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:26:17 crc kubenswrapper[4740]: I0130 16:26:17.336127 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:26:17 crc kubenswrapper[4740]: E0130 16:26:17.337193 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:26:29 crc kubenswrapper[4740]: I0130 16:26:29.336688 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:26:29 crc kubenswrapper[4740]: E0130 16:26:29.341298 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:26:34 crc kubenswrapper[4740]: I0130 16:26:34.197161 4740 scope.go:117] "RemoveContainer" containerID="ea2f769dfd823e14a3025458f20e3c3d13cbb63154a3e9ccf061e87d655e2f7a" Jan 30 16:26:41 crc kubenswrapper[4740]: I0130 16:26:41.336788 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:26:41 crc kubenswrapper[4740]: E0130 16:26:41.338114 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:26:52 crc kubenswrapper[4740]: I0130 16:26:52.336321 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:26:52 crc kubenswrapper[4740]: E0130 16:26:52.337250 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:27:02 crc kubenswrapper[4740]: I0130 16:27:02.057732 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7sbsg"] Jan 30 16:27:02 crc kubenswrapper[4740]: I0130 16:27:02.069757 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7sbsg"] Jan 30 16:27:03 crc kubenswrapper[4740]: I0130 16:27:03.037691 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9806-account-create-update-vkjtb"] Jan 30 16:27:03 crc kubenswrapper[4740]: I0130 16:27:03.050180 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-lz7sh"] Jan 30 16:27:03 crc kubenswrapper[4740]: I0130 16:27:03.060267 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-12bd-account-create-update-q7ffq"] Jan 30 16:27:03 crc kubenswrapper[4740]: I0130 16:27:03.070628 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-lz7sh"] Jan 30 16:27:03 crc kubenswrapper[4740]: I0130 16:27:03.082532 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9806-account-create-update-vkjtb"] Jan 30 16:27:03 crc kubenswrapper[4740]: I0130 16:27:03.098314 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-12bd-account-create-update-q7ffq"] Jan 30 16:27:03 crc kubenswrapper[4740]: I0130 16:27:03.351391 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1625e274-251a-4381-920f-4633abfc7b93" path="/var/lib/kubelet/pods/1625e274-251a-4381-920f-4633abfc7b93/volumes" Jan 30 16:27:03 crc kubenswrapper[4740]: I0130 16:27:03.353144 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="207e1134-f154-40c3-857f-5d3619c0843f" path="/var/lib/kubelet/pods/207e1134-f154-40c3-857f-5d3619c0843f/volumes" Jan 30 16:27:03 crc kubenswrapper[4740]: I0130 16:27:03.354806 
4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75656825-bedd-47be-9ae0-fde600c6a745" path="/var/lib/kubelet/pods/75656825-bedd-47be-9ae0-fde600c6a745/volumes" Jan 30 16:27:03 crc kubenswrapper[4740]: I0130 16:27:03.356606 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab35fc86-fda3-45b5-84cf-f2651169ab1d" path="/var/lib/kubelet/pods/ab35fc86-fda3-45b5-84cf-f2651169ab1d/volumes" Jan 30 16:27:04 crc kubenswrapper[4740]: I0130 16:27:04.336805 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:27:04 crc kubenswrapper[4740]: E0130 16:27:04.337303 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:27:09 crc kubenswrapper[4740]: I0130 16:27:09.043961 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a2dd-account-create-update-vtbbj"] Jan 30 16:27:09 crc kubenswrapper[4740]: I0130 16:27:09.062047 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-49lwq"] Jan 30 16:27:09 crc kubenswrapper[4740]: I0130 16:27:09.079818 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a2dd-account-create-update-vtbbj"] Jan 30 16:27:09 crc kubenswrapper[4740]: I0130 16:27:09.096747 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-49lwq"] Jan 30 16:27:09 crc kubenswrapper[4740]: I0130 16:27:09.378703 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11a8409-8872-4bdb-8409-db5350a4b0c4" path="/var/lib/kubelet/pods/d11a8409-8872-4bdb-8409-db5350a4b0c4/volumes" Jan 30 16:27:09 crc kubenswrapper[4740]: I0130 16:27:09.379384 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb1a729e-aa92-4658-b088-ec2b17042358" path="/var/lib/kubelet/pods/eb1a729e-aa92-4658-b088-ec2b17042358/volumes" Jan 30 16:27:16 crc kubenswrapper[4740]: I0130 16:27:16.336860 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:27:16 crc kubenswrapper[4740]: E0130 16:27:16.337977 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:27:28 crc kubenswrapper[4740]: I0130 16:27:28.335948 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:27:28 crc kubenswrapper[4740]: E0130 16:27:28.336788 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:27:33 crc kubenswrapper[4740]: I0130 16:27:33.053030 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-356a-account-create-update-clwmn"] Jan 30 16:27:33 crc kubenswrapper[4740]: I0130 16:27:33.070689 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-356a-account-create-update-clwmn"] Jan 30 16:27:33 crc kubenswrapper[4740]: I0130 16:27:33.350818 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cca32814-086a-46da-8e0f-01bce2d6dde1" path="/var/lib/kubelet/pods/cca32814-086a-46da-8e0f-01bce2d6dde1/volumes" Jan 30 16:27:34 crc kubenswrapper[4740]: I0130 16:27:34.036133 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-x645n"] Jan 30 16:27:34 crc kubenswrapper[4740]: I0130 16:27:34.049989 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-x645n"] Jan 30 16:27:34 crc kubenswrapper[4740]: I0130 16:27:34.065824 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8x9s2"] Jan 30 16:27:34 crc kubenswrapper[4740]: I0130 16:27:34.076841 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8x9s2"] Jan 30 16:27:34 crc kubenswrapper[4740]: I0130 16:27:34.272442 4740 scope.go:117] "RemoveContainer" containerID="9e0b57fb064daac15565582f4323ff713888279e3da95984f8c2a379ecd8a760" Jan 30 16:27:34 crc kubenswrapper[4740]: I0130 16:27:34.318870 4740 scope.go:117] "RemoveContainer" containerID="df962fd43f1590b89be890575c3b7144ecea97a32ec3fedebb17077b5bea0303" Jan 30 16:27:34 crc kubenswrapper[4740]: I0130 16:27:34.385716 4740 scope.go:117] "RemoveContainer" containerID="7b8716b66b6dd003178ef05026d38158d63a601838db5ba3855fade119c6e359" Jan 30 16:27:34 crc kubenswrapper[4740]: I0130 16:27:34.442876 4740 scope.go:117] "RemoveContainer" containerID="6063fc1dfce776627159f3677a4c28877a1f6aeca439438b683e3c82f1cd71e8" Jan 30 16:27:34 crc kubenswrapper[4740]: I0130 16:27:34.494088 4740 scope.go:117] "RemoveContainer" containerID="f37896113c60c056cba4138962f86fffdc199d6d367eb0cfc2aa7fa30e9d1c22" Jan 30 16:27:34 crc kubenswrapper[4740]: I0130 16:27:34.562683 4740 scope.go:117] "RemoveContainer" containerID="23fdb5f0537a9f7470f3195a756b7f65733c6c5442093d71314d99ecfc2ec628" Jan 30 16:27:34 crc kubenswrapper[4740]: I0130 16:27:34.613682 4740 scope.go:117] "RemoveContainer" containerID="7de0206ebc729eaa887298c8fde824ffbfb8571562022395efd0341da50542a6" Jan 30 16:27:35 crc kubenswrapper[4740]: I0130 16:27:35.349826 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa8c381a-3987-4702-b366-7ac197e0a1af" path="/var/lib/kubelet/pods/aa8c381a-3987-4702-b366-7ac197e0a1af/volumes" Jan 30 16:27:35 crc kubenswrapper[4740]: I0130 16:27:35.351002 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2508d2b-35c8-4f18-bcef-a5a4b6cb046f" path="/var/lib/kubelet/pods/f2508d2b-35c8-4f18-bcef-a5a4b6cb046f/volumes" Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.063530 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-397f-account-create-update-zmhwn"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.079216 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-397f-account-create-update-zmhwn"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.100904 4740 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-v7ghb"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.117475 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-3feb-account-create-update-dk4rx"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.133795 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bd86c"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.148428 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ca3f-account-create-update-wd46b"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.163416 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-3feb-account-create-update-dk4rx"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.178404 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ca3f-account-create-update-wd46b"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.197049 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-v7ghb"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.212244 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bd86c"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.224508 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-78p2v"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.236286 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-78p2v"] Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.351479 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21370d38-9663-4ffe-acb4-f009ebf39a66" path="/var/lib/kubelet/pods/21370d38-9663-4ffe-acb4-f009ebf39a66/volumes" Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.354153 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21fe8a82-8128-465d-8187-b0d997c6cd55" path="/var/lib/kubelet/pods/21fe8a82-8128-465d-8187-b0d997c6cd55/volumes" Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.357830 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5e433e-38a7-4b0f-b95a-20a0c3229b56" path="/var/lib/kubelet/pods/7d5e433e-38a7-4b0f-b95a-20a0c3229b56/volumes" Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.360020 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7c1d41-649f-4a15-aec6-e8e6af5032b7" path="/var/lib/kubelet/pods/7e7c1d41-649f-4a15-aec6-e8e6af5032b7/volumes" Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.363208 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9cb2731-a1e5-444c-aa69-8a6c61e57cd5" path="/var/lib/kubelet/pods/b9cb2731-a1e5-444c-aa69-8a6c61e57cd5/volumes" Jan 30 16:27:39 crc kubenswrapper[4740]: I0130 16:27:39.365576 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e89928-c4f5-41ca-aea1-131fa654097d" path="/var/lib/kubelet/pods/f3e89928-c4f5-41ca-aea1-131fa654097d/volumes" Jan 30 16:27:42 crc kubenswrapper[4740]: I0130 16:27:42.337116 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:27:42 crc kubenswrapper[4740]: E0130 16:27:42.337878 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 16:27:57 crc kubenswrapper[4740]: I0130 16:27:57.339583 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97"
Jan 30 16:27:58 crc kubenswrapper[4740]: I0130 16:27:58.386650 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"34c630f6b84227a1977b0bdc0b5ca309de8059895b7246dc8e8a5fd3593d976c"}
Jan 30 16:28:03 crc kubenswrapper[4740]: I0130 16:28:03.056633 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-9z8ns"]
Jan 30 16:28:03 crc kubenswrapper[4740]: I0130 16:28:03.068685 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-9z8ns"]
Jan 30 16:28:03 crc kubenswrapper[4740]: I0130 16:28:03.355900 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b14301-3181-46f9-82ed-2d0ca6a44374" path="/var/lib/kubelet/pods/46b14301-3181-46f9-82ed-2d0ca6a44374/volumes"
Jan 30 16:28:04 crc kubenswrapper[4740]: I0130 16:28:04.051986 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kb9rw"]
Jan 30 16:28:04 crc kubenswrapper[4740]: I0130 16:28:04.069041 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kb9rw"]
Jan 30 16:28:05 crc kubenswrapper[4740]: I0130 16:28:05.356486 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b01ab87-38ce-4839-ac41-038201f727f9" path="/var/lib/kubelet/pods/7b01ab87-38ce-4839-ac41-038201f727f9/volumes"
Jan 30 16:28:34 crc kubenswrapper[4740]: I0130 16:28:34.809344 4740 scope.go:117] "RemoveContainer" containerID="e97acd5936a7b9c720194673f8de009e6e07a8a80c52d9b525c8c68a375ff34e"
Jan 30 16:28:34 crc kubenswrapper[4740]: I0130 16:28:34.867044 4740 scope.go:117] "RemoveContainer" containerID="80c12805cd6aa6be8e84c8d96943505c86c74597d73c63c726ad1d65487bae34"
Jan 30 16:28:34 crc kubenswrapper[4740]: I0130 16:28:34.953542 4740 scope.go:117] "RemoveContainer" containerID="d4b77445ada670e04e79ef77c9b25a96f16d88f1decc2990831f888eecf063db"
Jan 30 16:28:35 crc kubenswrapper[4740]: I0130 16:28:35.021603 4740 scope.go:117] "RemoveContainer" containerID="3c35bb539c0af7148e74e52c998b23a715ff1848f9f6c817bbb0cf11cc1a3b83"
Jan 30 16:28:35 crc kubenswrapper[4740]: I0130 16:28:35.061085 4740 scope.go:117] "RemoveContainer" containerID="0ee802a35a24582ba03c784b64b19c9689d896531710c700782985d90e411743"
Jan 30 16:28:35 crc kubenswrapper[4740]: I0130 16:28:35.119334 4740 scope.go:117] "RemoveContainer" containerID="5728efea581ad5a415ac5bede63d154a9b94eb36868da1713c0bee6aa88e106f"
Jan 30 16:28:35 crc kubenswrapper[4740]: I0130 16:28:35.171021 4740 scope.go:117] "RemoveContainer" containerID="a240866ede110ecb5a652b2c4933209aa1468ec9a06d696db15a4fdc73d6e57d"
Jan 30 16:28:35 crc kubenswrapper[4740]: I0130 16:28:35.203615 4740 scope.go:117] "RemoveContainer" containerID="15acd226fef64fcc08f14ab32aa800c2d560714fa1b43eef82e1b9d1d08fd1fd"
Jan 30 16:28:35 crc kubenswrapper[4740]: I0130 16:28:35.239597 4740 scope.go:117] "RemoveContainer" containerID="54bf3af07185705ec52d7ad900bbd9b5e90a774964d862b109deffe3c3d59962"
Jan 30 16:28:35 crc kubenswrapper[4740]: I0130 16:28:35.291685 4740 scope.go:117] "RemoveContainer" containerID="8d158e8436777bd801de6e538a077ab9ca6b8092cbe750c9cc28d29f6240198d"
Jan 30 16:29:04 crc kubenswrapper[4740]: I0130 16:29:04.259159 4740 generic.go:334] "Generic (PLEG): container finished" podID="1d25020c-4758-47af-a6c4-5c6cd3c1b74b" containerID="f292b23e7e3eba926ca49dd3f35204f8b910a8fd085978b593c7f3342df59278" exitCode=0
Jan 30 16:29:04 crc kubenswrapper[4740]: I0130 16:29:04.259314 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" event={"ID":"1d25020c-4758-47af-a6c4-5c6cd3c1b74b","Type":"ContainerDied","Data":"f292b23e7e3eba926ca49dd3f35204f8b910a8fd085978b593c7f3342df59278"}
Jan 30 16:29:05 crc kubenswrapper[4740]: I0130 16:29:05.867216 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.002080 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-inventory\") pod \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") "
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.002216 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-bootstrap-combined-ca-bundle\") pod \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") "
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.002414 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-ssh-key-openstack-edpm-ipam\") pod \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") "
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.002474 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jl8v\" (UniqueName: \"kubernetes.io/projected/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-kube-api-access-2jl8v\") pod \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\" (UID: \"1d25020c-4758-47af-a6c4-5c6cd3c1b74b\") "
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.014240 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-kube-api-access-2jl8v" (OuterVolumeSpecName: "kube-api-access-2jl8v") pod "1d25020c-4758-47af-a6c4-5c6cd3c1b74b" (UID: "1d25020c-4758-47af-a6c4-5c6cd3c1b74b"). InnerVolumeSpecName "kube-api-access-2jl8v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.017540 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1d25020c-4758-47af-a6c4-5c6cd3c1b74b" (UID: "1d25020c-4758-47af-a6c4-5c6cd3c1b74b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.038516 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-inventory" (OuterVolumeSpecName: "inventory") pod "1d25020c-4758-47af-a6c4-5c6cd3c1b74b" (UID: "1d25020c-4758-47af-a6c4-5c6cd3c1b74b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.063654 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1d25020c-4758-47af-a6c4-5c6cd3c1b74b" (UID: "1d25020c-4758-47af-a6c4-5c6cd3c1b74b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.107358 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.108000 4740 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.108061 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.108079 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jl8v\" (UniqueName: \"kubernetes.io/projected/1d25020c-4758-47af-a6c4-5c6cd3c1b74b-kube-api-access-2jl8v\") on node \"crc\" DevicePath \"\""
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.285224 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv" event={"ID":"1d25020c-4758-47af-a6c4-5c6cd3c1b74b","Type":"ContainerDied","Data":"094ae4165e84afeb565cd29c6fd1f12f68cf560a4f8fff5a979a81db6bbe0c00"}
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.285312 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="094ae4165e84afeb565cd29c6fd1f12f68cf560a4f8fff5a979a81db6bbe0c00"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.285846 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.427705 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"]
Jan 30 16:29:06 crc kubenswrapper[4740]: E0130 16:29:06.428618 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d25020c-4758-47af-a6c4-5c6cd3c1b74b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.428642 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d25020c-4758-47af-a6c4-5c6cd3c1b74b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.429061 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d25020c-4758-47af-a6c4-5c6cd3c1b74b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.430479 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.439009 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.440064 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.440406 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.448343 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.461295 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"]
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.528337 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.528451 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.528621 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48q5v\" (UniqueName: \"kubernetes.io/projected/c63be956-8703-45e6-8b81-1867d602a2d8-kube-api-access-48q5v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.633459 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.633525 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.633600 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48q5v\" (UniqueName: \"kubernetes.io/projected/c63be956-8703-45e6-8b81-1867d602a2d8-kube-api-access-48q5v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.638934 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.639426 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.659315 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48q5v\" (UniqueName: \"kubernetes.io/projected/c63be956-8703-45e6-8b81-1867d602a2d8-kube-api-access-48q5v\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"
Jan 30 16:29:06 crc kubenswrapper[4740]: I0130 16:29:06.767407 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"
Jan 30 16:29:07 crc kubenswrapper[4740]: I0130 16:29:07.306541 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d"]
Jan 30 16:29:08 crc kubenswrapper[4740]: I0130 16:29:08.320035 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d" event={"ID":"c63be956-8703-45e6-8b81-1867d602a2d8","Type":"ContainerStarted","Data":"a901c63e3c5daf45a7c3532edc09170e83211e8fd3021667fad68af5c58ba914"}
Jan 30 16:29:08 crc kubenswrapper[4740]: I0130 16:29:08.320846 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d" event={"ID":"c63be956-8703-45e6-8b81-1867d602a2d8","Type":"ContainerStarted","Data":"cb39e88fdc135a526958d151a8d163ec0b45769abde798e3e860880c13fd3714"}
Jan 30 16:29:08 crc kubenswrapper[4740]: I0130 16:29:08.358522 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d" podStartSLOduration=1.957472968 podStartE2EDuration="2.358489347s" podCreationTimestamp="2026-01-30 16:29:06 +0000 UTC" firstStartedPulling="2026-01-30 16:29:07.320256478 +0000 UTC m=+1995.957319077" lastFinishedPulling="2026-01-30 16:29:07.721272857 +0000 UTC m=+1996.358335456" observedRunningTime="2026-01-30 16:29:08.339868225 +0000 UTC m=+1996.976930884" watchObservedRunningTime="2026-01-30 16:29:08.358489347 +0000 UTC m=+1996.995551986"
Jan 30 16:29:29 crc kubenswrapper[4740]: I0130 16:29:29.071921 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rq44b"]
Jan 30 16:29:29 crc kubenswrapper[4740]: I0130 16:29:29.088520 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rq44b"]
Jan 30 16:29:29 crc kubenswrapper[4740]: I0130 16:29:29.353084 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab8f73dc-23d4-4221-b0a4-a76f2373e7b7" path="/var/lib/kubelet/pods/ab8f73dc-23d4-4221-b0a4-a76f2373e7b7/volumes"
Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.050142 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2k9q9"]
Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.063085 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ltwk6"]
Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.099605 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vhkg4"]
Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.121168 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2k9q9"]
Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.136437 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ltwk6"]
Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.148878 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vhkg4"]
Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.356647 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc1c912a-97a6-4de7-ad45-ced02c0f40e5" path="/var/lib/kubelet/pods/cc1c912a-97a6-4de7-ad45-ced02c0f40e5/volumes"
Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.359785 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd120629-d064-4ce0-a5d2-73656425765f" path="/var/lib/kubelet/pods/cd120629-d064-4ce0-a5d2-73656425765f/volumes"
podUID="cd120629-d064-4ce0-a5d2-73656425765f" path="/var/lib/kubelet/pods/cd120629-d064-4ce0-a5d2-73656425765f/volumes" Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.362211 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68ec665-a90a-4332-8e78-79f658776815" path="/var/lib/kubelet/pods/e68ec665-a90a-4332-8e78-79f658776815/volumes" Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.580035 4740 scope.go:117] "RemoveContainer" containerID="c1775948756214f31d782697a6482275f7f5e00820681667ea3e55feeea2dacd" Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.660616 4740 scope.go:117] "RemoveContainer" containerID="98024c68ffaac917ede6a544df17eafb9a8539d6c1de2916e3234db2e97cd801" Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.712788 4740 scope.go:117] "RemoveContainer" containerID="71e26fddcac7ac2d674d9e4181c99c52d0326d5ccdd828adf918194a7bf2f30d" Jan 30 16:29:35 crc kubenswrapper[4740]: I0130 16:29:35.757329 4740 scope.go:117] "RemoveContainer" containerID="746789ef9658790b905a92511842cb3dd5f1417a1a034f491db8cf9b203b0a98" Jan 30 16:29:41 crc kubenswrapper[4740]: I0130 16:29:41.048752 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-pkfjm"] Jan 30 16:29:41 crc kubenswrapper[4740]: I0130 16:29:41.060333 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-pkfjm"] Jan 30 16:29:41 crc kubenswrapper[4740]: I0130 16:29:41.353721 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2754b498-304b-47aa-a2d3-71a9c2f70e8e" path="/var/lib/kubelet/pods/2754b498-304b-47aa-a2d3-71a9c2f70e8e/volumes" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.161821 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz"] Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.164375 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.168259 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.168897 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.180757 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz"] Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.252470 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-config-volume\") pod \"collect-profiles-29496510-dmpfz\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.253116 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw9p5\" (UniqueName: \"kubernetes.io/projected/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-kube-api-access-bw9p5\") pod \"collect-profiles-29496510-dmpfz\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.253236 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-secret-volume\") pod \"collect-profiles-29496510-dmpfz\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.357097 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-secret-volume\") pod \"collect-profiles-29496510-dmpfz\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.357387 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-config-volume\") pod \"collect-profiles-29496510-dmpfz\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.357637 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw9p5\" (UniqueName: \"kubernetes.io/projected/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-kube-api-access-bw9p5\") pod \"collect-profiles-29496510-dmpfz\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.359617 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-config-volume\") pod 
\"collect-profiles-29496510-dmpfz\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.372543 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-secret-volume\") pod \"collect-profiles-29496510-dmpfz\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.381462 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw9p5\" (UniqueName: \"kubernetes.io/projected/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-kube-api-access-bw9p5\") pod \"collect-profiles-29496510-dmpfz\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:00 crc kubenswrapper[4740]: I0130 16:30:00.555623 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:01 crc kubenswrapper[4740]: I0130 16:30:01.058703 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz"] Jan 30 16:30:02 crc kubenswrapper[4740]: I0130 16:30:02.005256 4740 generic.go:334] "Generic (PLEG): container finished" podID="e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03" containerID="9681d2b6e2ead397d583382cb362f3270e511d79abe6d6f4c679c22925fbc0d3" exitCode=0 Jan 30 16:30:02 crc kubenswrapper[4740]: I0130 16:30:02.005566 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" event={"ID":"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03","Type":"ContainerDied","Data":"9681d2b6e2ead397d583382cb362f3270e511d79abe6d6f4c679c22925fbc0d3"} Jan 30 16:30:02 crc kubenswrapper[4740]: I0130 16:30:02.007376 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" event={"ID":"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03","Type":"ContainerStarted","Data":"ff3f86161049330d35239f0601b00f0ea713d6db3805308f42caf6ae1a9888d9"} Jan 30 16:30:03 crc kubenswrapper[4740]: I0130 16:30:03.481982 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:03 crc kubenswrapper[4740]: I0130 16:30:03.564046 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-secret-volume\") pod \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " Jan 30 16:30:03 crc kubenswrapper[4740]: I0130 16:30:03.564266 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw9p5\" (UniqueName: \"kubernetes.io/projected/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-kube-api-access-bw9p5\") pod \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " Jan 30 16:30:03 crc kubenswrapper[4740]: I0130 16:30:03.564450 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-config-volume\") pod \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\" (UID: \"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03\") " Jan 30 16:30:03 crc kubenswrapper[4740]: I0130 16:30:03.565303 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03" (UID: "e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:30:03 crc kubenswrapper[4740]: I0130 16:30:03.566638 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:30:03 crc kubenswrapper[4740]: I0130 16:30:03.570722 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-kube-api-access-bw9p5" (OuterVolumeSpecName: "kube-api-access-bw9p5") pod "e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03" (UID: "e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03"). InnerVolumeSpecName "kube-api-access-bw9p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:30:03 crc kubenswrapper[4740]: I0130 16:30:03.571109 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03" (UID: "e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:30:03 crc kubenswrapper[4740]: I0130 16:30:03.668389 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:30:03 crc kubenswrapper[4740]: I0130 16:30:03.668435 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw9p5\" (UniqueName: \"kubernetes.io/projected/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03-kube-api-access-bw9p5\") on node \"crc\" DevicePath \"\"" Jan 30 16:30:04 crc kubenswrapper[4740]: I0130 16:30:04.060255 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" event={"ID":"e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03","Type":"ContainerDied","Data":"ff3f86161049330d35239f0601b00f0ea713d6db3805308f42caf6ae1a9888d9"} Jan 30 16:30:04 crc kubenswrapper[4740]: I0130 16:30:04.060336 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff3f86161049330d35239f0601b00f0ea713d6db3805308f42caf6ae1a9888d9" Jan 30 16:30:04 crc kubenswrapper[4740]: I0130 16:30:04.060441 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz" Jan 30 16:30:04 crc kubenswrapper[4740]: I0130 16:30:04.597751 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t"] Jan 30 16:30:04 crc kubenswrapper[4740]: I0130 16:30:04.609279 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496465-r7p7t"] Jan 30 16:30:05 crc kubenswrapper[4740]: I0130 16:30:05.358854 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4be91ca-c1df-458b-b8da-29f713fefe22" path="/var/lib/kubelet/pods/b4be91ca-c1df-458b-b8da-29f713fefe22/volumes" Jan 30 16:30:24 crc kubenswrapper[4740]: I0130 16:30:24.455154 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:30:24 crc kubenswrapper[4740]: I0130 16:30:24.455919 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:30:35 crc kubenswrapper[4740]: I0130 16:30:35.918476 4740 scope.go:117] "RemoveContainer" containerID="1344ab6c54073eb0e098787b221c48c7db2b0e6b9a160e797a2cb2826f5bd461" Jan 30 16:30:35 crc kubenswrapper[4740]: I0130 16:30:35.964581 4740 scope.go:117] "RemoveContainer" containerID="4b76f876c40cbe9002b8b761ac6ec48d6d0794b158b9eb9f96785d0d672a8c9d" Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.055129 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qjb6j"] Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.064982 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-4t7gm"] Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.077978 4740 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell1-db-create-8pn9g"] Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.091655 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qjb6j"] Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.100669 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7dff-account-create-update-kxgjd"] Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.112690 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-69a4-account-create-update-tnzls"] Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.125166 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-aad2-account-create-update-69g8c"] Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.137516 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-4t7gm"] Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.152112 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7dff-account-create-update-kxgjd"] Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.159830 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8pn9g"] Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.170948 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-69a4-account-create-update-tnzls"] Jan 30 16:30:50 crc kubenswrapper[4740]: I0130 16:30:50.181901 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-aad2-account-create-update-69g8c"] Jan 30 16:30:51 crc kubenswrapper[4740]: I0130 16:30:51.357301 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e660bb0-89ad-41e3-8b92-c57cdb00e15a" path="/var/lib/kubelet/pods/0e660bb0-89ad-41e3-8b92-c57cdb00e15a/volumes" Jan 30 16:30:51 crc kubenswrapper[4740]: I0130 16:30:51.361787 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35414dff-66ea-4bb3-9a02-46c80f0822a8" path="/var/lib/kubelet/pods/35414dff-66ea-4bb3-9a02-46c80f0822a8/volumes" Jan 30 16:30:51 crc kubenswrapper[4740]: I0130 16:30:51.364406 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eda7b5b-a45e-4aaa-a107-1b602beb6ed1" path="/var/lib/kubelet/pods/6eda7b5b-a45e-4aaa-a107-1b602beb6ed1/volumes" Jan 30 16:30:51 crc kubenswrapper[4740]: I0130 16:30:51.365812 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e2940b-8a2d-4865-a312-a5f3b783f0b0" path="/var/lib/kubelet/pods/a7e2940b-8a2d-4865-a312-a5f3b783f0b0/volumes" Jan 30 16:30:51 crc kubenswrapper[4740]: I0130 16:30:51.367402 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b99efeb7-303a-444e-8427-8c5613d8bc65" path="/var/lib/kubelet/pods/b99efeb7-303a-444e-8427-8c5613d8bc65/volumes" Jan 30 16:30:51 crc kubenswrapper[4740]: I0130 16:30:51.370126 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbfcfe3a-498e-4604-a5ca-97a951b24573" path="/var/lib/kubelet/pods/cbfcfe3a-498e-4604-a5ca-97a951b24573/volumes" Jan 30 16:30:54 crc kubenswrapper[4740]: I0130 16:30:54.454806 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:30:54 crc kubenswrapper[4740]: I0130 16:30:54.455196 4740 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:31:03 crc kubenswrapper[4740]: I0130 16:31:03.805909 4740 generic.go:334] "Generic (PLEG): container finished" podID="c63be956-8703-45e6-8b81-1867d602a2d8" containerID="a901c63e3c5daf45a7c3532edc09170e83211e8fd3021667fad68af5c58ba914" exitCode=0 Jan 30 16:31:03 crc kubenswrapper[4740]: I0130 16:31:03.806041 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d" event={"ID":"c63be956-8703-45e6-8b81-1867d602a2d8","Type":"ContainerDied","Data":"a901c63e3c5daf45a7c3532edc09170e83211e8fd3021667fad68af5c58ba914"} Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.501900 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.638328 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-inventory\") pod \"c63be956-8703-45e6-8b81-1867d602a2d8\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.638723 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48q5v\" (UniqueName: \"kubernetes.io/projected/c63be956-8703-45e6-8b81-1867d602a2d8-kube-api-access-48q5v\") pod \"c63be956-8703-45e6-8b81-1867d602a2d8\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.638892 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-ssh-key-openstack-edpm-ipam\") pod \"c63be956-8703-45e6-8b81-1867d602a2d8\" (UID: \"c63be956-8703-45e6-8b81-1867d602a2d8\") " Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.649310 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63be956-8703-45e6-8b81-1867d602a2d8-kube-api-access-48q5v" (OuterVolumeSpecName: "kube-api-access-48q5v") pod "c63be956-8703-45e6-8b81-1867d602a2d8" (UID: "c63be956-8703-45e6-8b81-1867d602a2d8"). InnerVolumeSpecName "kube-api-access-48q5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.677780 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c63be956-8703-45e6-8b81-1867d602a2d8" (UID: "c63be956-8703-45e6-8b81-1867d602a2d8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.680532 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-inventory" (OuterVolumeSpecName: "inventory") pod "c63be956-8703-45e6-8b81-1867d602a2d8" (UID: "c63be956-8703-45e6-8b81-1867d602a2d8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.741718 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.741773 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48q5v\" (UniqueName: \"kubernetes.io/projected/c63be956-8703-45e6-8b81-1867d602a2d8-kube-api-access-48q5v\") on node \"crc\" DevicePath \"\"" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.741790 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c63be956-8703-45e6-8b81-1867d602a2d8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.853660 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d" event={"ID":"c63be956-8703-45e6-8b81-1867d602a2d8","Type":"ContainerDied","Data":"cb39e88fdc135a526958d151a8d163ec0b45769abde798e3e860880c13fd3714"} Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.853731 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb39e88fdc135a526958d151a8d163ec0b45769abde798e3e860880c13fd3714" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.853864 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.952505 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r"] Jan 30 16:31:05 crc kubenswrapper[4740]: E0130 16:31:05.953286 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63be956-8703-45e6-8b81-1867d602a2d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.953675 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63be956-8703-45e6-8b81-1867d602a2d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 16:31:05 crc kubenswrapper[4740]: E0130 16:31:05.953728 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03" containerName="collect-profiles" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.953738 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03" containerName="collect-profiles" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.953996 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03" containerName="collect-profiles" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.954030 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63be956-8703-45e6-8b81-1867d602a2d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.955182 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.957762 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.958657 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.959309 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.962677 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r"] Jan 30 16:31:05 crc kubenswrapper[4740]: I0130 16:31:05.965683 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.079051 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlmj7\" (UniqueName: \"kubernetes.io/projected/2cf84dba-a4e6-413f-a6d5-81779c179d30-kube-api-access-wlmj7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.079218 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.079379 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.181583 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.181742 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.181827 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlmj7\" (UniqueName: 
\"kubernetes.io/projected/2cf84dba-a4e6-413f-a6d5-81779c179d30-kube-api-access-wlmj7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.186445 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.191220 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.201606 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlmj7\" (UniqueName: \"kubernetes.io/projected/2cf84dba-a4e6-413f-a6d5-81779c179d30-kube-api-access-wlmj7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.276271 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.897240 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r"] Jan 30 16:31:06 crc kubenswrapper[4740]: I0130 16:31:06.922328 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:31:07 crc kubenswrapper[4740]: I0130 16:31:07.874816 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" event={"ID":"2cf84dba-a4e6-413f-a6d5-81779c179d30","Type":"ContainerStarted","Data":"1c92de1754bf5abfee1d98154bce03de207fc5755ff636518adfaa00c83b7488"} Jan 30 16:31:07 crc kubenswrapper[4740]: I0130 16:31:07.875621 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" event={"ID":"2cf84dba-a4e6-413f-a6d5-81779c179d30","Type":"ContainerStarted","Data":"8b2bbdabb2e2fefa7622e73763cea5641fe892d40ca9f621aa04c781e9bd29c3"} Jan 30 16:31:07 crc kubenswrapper[4740]: I0130 16:31:07.897246 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" podStartSLOduration=2.280911697 podStartE2EDuration="2.897210857s" podCreationTimestamp="2026-01-30 16:31:05 +0000 UTC" firstStartedPulling="2026-01-30 16:31:06.921914132 +0000 UTC m=+2115.558976731" lastFinishedPulling="2026-01-30 16:31:07.538213292 +0000 UTC m=+2116.175275891" observedRunningTime="2026-01-30 16:31:07.891400873 +0000 UTC m=+2116.528463462" watchObservedRunningTime="2026-01-30 16:31:07.897210857 +0000 UTC 
m=+2116.534273456" Jan 30 16:31:24 crc kubenswrapper[4740]: I0130 16:31:24.454687 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:31:24 crc kubenswrapper[4740]: I0130 16:31:24.455776 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:31:24 crc kubenswrapper[4740]: I0130 16:31:24.455854 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 16:31:24 crc kubenswrapper[4740]: I0130 16:31:24.457059 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34c630f6b84227a1977b0bdc0b5ca309de8059895b7246dc8e8a5fd3593d976c"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:31:24 crc kubenswrapper[4740]: I0130 16:31:24.457123 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://34c630f6b84227a1977b0bdc0b5ca309de8059895b7246dc8e8a5fd3593d976c" gracePeriod=600 Jan 30 16:31:25 crc kubenswrapper[4740]: I0130 16:31:25.120993 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="34c630f6b84227a1977b0bdc0b5ca309de8059895b7246dc8e8a5fd3593d976c" exitCode=0 Jan 30 16:31:25 crc kubenswrapper[4740]: I0130 16:31:25.121544 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"34c630f6b84227a1977b0bdc0b5ca309de8059895b7246dc8e8a5fd3593d976c"} Jan 30 16:31:25 crc kubenswrapper[4740]: I0130 16:31:25.121617 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100"} Jan 30 16:31:25 crc kubenswrapper[4740]: I0130 16:31:25.121644 4740 scope.go:117] "RemoveContainer" containerID="a42c911cc2fee800bd1b118e4b26ff0eb9418aff50378d310e7b634559667d97" Jan 30 16:31:36 crc kubenswrapper[4740]: I0130 16:31:36.083148 4740 scope.go:117] "RemoveContainer" containerID="14e9679d70a170957aa301054b35813fde673c65a415212b93d0ee02212b7385" Jan 30 16:31:36 crc kubenswrapper[4740]: I0130 16:31:36.132662 4740 scope.go:117] "RemoveContainer" containerID="426d00a39ed7cbcfb518804feac673fe151df010066879a5ba2ca3c032ca7f2c" Jan 30 16:31:36 crc kubenswrapper[4740]: I0130 16:31:36.215862 4740 scope.go:117] "RemoveContainer" containerID="9e68fc51326d234de6d59f66d4ebf21c6d69e87d179363a28d118bbe8c159d6a" Jan 30 16:31:36 crc kubenswrapper[4740]: I0130 16:31:36.270723 4740 scope.go:117] "RemoveContainer" 
containerID="286cb5759262c2cd4647cdd155e1c1f4c5a006dcc9984e672168f649fc36abf2" Jan 30 16:31:36 crc kubenswrapper[4740]: I0130 16:31:36.338668 4740 scope.go:117] "RemoveContainer" containerID="1f8aadcf72df2f6526b5f684e17833291def67ee4547f64acc7f033b2810bef0" Jan 30 16:31:36 crc kubenswrapper[4740]: I0130 16:31:36.426033 4740 scope.go:117] "RemoveContainer" containerID="ed468d6ec953cfa279f095debd149975043b9ad5131f401a04d04777f2d3bab7" Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.538019 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pfl2s"] Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.542183 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.551636 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pfl2s"] Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.693364 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-utilities\") pod \"redhat-operators-pfl2s\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.693473 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-catalog-content\") pod \"redhat-operators-pfl2s\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.693495 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrf2n\" (UniqueName: \"kubernetes.io/projected/add8ae61-51c0-42b2-9b75-26107df23416-kube-api-access-rrf2n\") pod \"redhat-operators-pfl2s\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.796701 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-catalog-content\") pod \"redhat-operators-pfl2s\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.796791 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrf2n\" (UniqueName: \"kubernetes.io/projected/add8ae61-51c0-42b2-9b75-26107df23416-kube-api-access-rrf2n\") pod \"redhat-operators-pfl2s\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.797002 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-utilities\") pod \"redhat-operators-pfl2s\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.797468 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-catalog-content\") pod \"redhat-operators-pfl2s\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.797576 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-utilities\") pod \"redhat-operators-pfl2s\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.821557 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrf2n\" (UniqueName: \"kubernetes.io/projected/add8ae61-51c0-42b2-9b75-26107df23416-kube-api-access-rrf2n\") pod \"redhat-operators-pfl2s\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:31:50 crc kubenswrapper[4740]: I0130 16:31:50.870222 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:31:51 crc kubenswrapper[4740]: I0130 16:31:51.458532 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pfl2s"] Jan 30 16:31:51 crc kubenswrapper[4740]: I0130 16:31:51.588792 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfl2s" event={"ID":"add8ae61-51c0-42b2-9b75-26107df23416","Type":"ContainerStarted","Data":"514e37c7786031f639030d5face3336e41e22a9537d3d764fa0055818f298df8"} Jan 30 16:31:52 crc kubenswrapper[4740]: I0130 16:31:52.600730 4740 generic.go:334] "Generic (PLEG): container finished" podID="add8ae61-51c0-42b2-9b75-26107df23416" containerID="1bdb72b044758bc9015c0b8ab128b2bd5ad6d3a10abd6ec56e7081871985ba7f" exitCode=0 Jan 30 16:31:52 crc kubenswrapper[4740]: I0130 16:31:52.600893 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfl2s" event={"ID":"add8ae61-51c0-42b2-9b75-26107df23416","Type":"ContainerDied","Data":"1bdb72b044758bc9015c0b8ab128b2bd5ad6d3a10abd6ec56e7081871985ba7f"} Jan 30 16:31:54 crc kubenswrapper[4740]: I0130 16:31:54.626183 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfl2s" event={"ID":"add8ae61-51c0-42b2-9b75-26107df23416","Type":"ContainerStarted","Data":"7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2"} Jan 30 16:32:04 crc kubenswrapper[4740]: I0130 16:32:04.735024 4740 generic.go:334] "Generic (PLEG): container finished" podID="add8ae61-51c0-42b2-9b75-26107df23416" containerID="7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2" exitCode=0 Jan 30 16:32:04 crc kubenswrapper[4740]: I0130 16:32:04.735145 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfl2s" event={"ID":"add8ae61-51c0-42b2-9b75-26107df23416","Type":"ContainerDied","Data":"7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2"} Jan 30 16:32:05 crc kubenswrapper[4740]: I0130 16:32:05.749309 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfl2s" event={"ID":"add8ae61-51c0-42b2-9b75-26107df23416","Type":"ContainerStarted","Data":"004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667"} Jan 30 16:32:05 crc kubenswrapper[4740]: I0130 16:32:05.777190 4740 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pfl2s" podStartSLOduration=3.241703546 podStartE2EDuration="15.777161137s" podCreationTimestamp="2026-01-30 16:31:50 +0000 UTC" firstStartedPulling="2026-01-30 16:31:52.603095182 +0000 UTC m=+2161.240157781" lastFinishedPulling="2026-01-30 16:32:05.138552773 +0000 UTC m=+2173.775615372" observedRunningTime="2026-01-30 16:32:05.771452355 +0000 UTC m=+2174.408514994" watchObservedRunningTime="2026-01-30 16:32:05.777161137 +0000 UTC m=+2174.414223736" Jan 30 16:32:10 crc kubenswrapper[4740]: I0130 16:32:10.870937 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:32:10 crc kubenswrapper[4740]: I0130 16:32:10.871766 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:32:11 crc kubenswrapper[4740]: I0130 16:32:11.979252 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pfl2s" podUID="add8ae61-51c0-42b2-9b75-26107df23416" containerName="registry-server" probeResult="failure" output=< Jan 30 16:32:11 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:32:11 crc kubenswrapper[4740]: > Jan 30 16:32:12 crc kubenswrapper[4740]: I0130 16:32:12.843564 4740 generic.go:334] "Generic (PLEG): container finished" podID="2cf84dba-a4e6-413f-a6d5-81779c179d30" containerID="1c92de1754bf5abfee1d98154bce03de207fc5755ff636518adfaa00c83b7488" exitCode=0 Jan 30 16:32:12 crc kubenswrapper[4740]: I0130 16:32:12.844166 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" event={"ID":"2cf84dba-a4e6-413f-a6d5-81779c179d30","Type":"ContainerDied","Data":"1c92de1754bf5abfee1d98154bce03de207fc5755ff636518adfaa00c83b7488"} Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.488678 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.644329 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-inventory\") pod \"2cf84dba-a4e6-413f-a6d5-81779c179d30\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.644669 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-ssh-key-openstack-edpm-ipam\") pod \"2cf84dba-a4e6-413f-a6d5-81779c179d30\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.644890 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlmj7\" (UniqueName: \"kubernetes.io/projected/2cf84dba-a4e6-413f-a6d5-81779c179d30-kube-api-access-wlmj7\") pod \"2cf84dba-a4e6-413f-a6d5-81779c179d30\" (UID: \"2cf84dba-a4e6-413f-a6d5-81779c179d30\") " Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.659661 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf84dba-a4e6-413f-a6d5-81779c179d30-kube-api-access-wlmj7" (OuterVolumeSpecName: "kube-api-access-wlmj7") pod "2cf84dba-a4e6-413f-a6d5-81779c179d30" (UID: "2cf84dba-a4e6-413f-a6d5-81779c179d30"). InnerVolumeSpecName "kube-api-access-wlmj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.700400 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-inventory" (OuterVolumeSpecName: "inventory") pod "2cf84dba-a4e6-413f-a6d5-81779c179d30" (UID: "2cf84dba-a4e6-413f-a6d5-81779c179d30"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.734997 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2cf84dba-a4e6-413f-a6d5-81779c179d30" (UID: "2cf84dba-a4e6-413f-a6d5-81779c179d30"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.748230 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.748282 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlmj7\" (UniqueName: \"kubernetes.io/projected/2cf84dba-a4e6-413f-a6d5-81779c179d30-kube-api-access-wlmj7\") on node \"crc\" DevicePath \"\"" Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.748293 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cf84dba-a4e6-413f-a6d5-81779c179d30-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.887795 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" event={"ID":"2cf84dba-a4e6-413f-a6d5-81779c179d30","Type":"ContainerDied","Data":"8b2bbdabb2e2fefa7622e73763cea5641fe892d40ca9f621aa04c781e9bd29c3"} Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.887849 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b2bbdabb2e2fefa7622e73763cea5641fe892d40ca9f621aa04c781e9bd29c3" Jan 30 16:32:14 crc kubenswrapper[4740]: I0130 16:32:14.887894 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.001334 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b"] Jan 30 16:32:15 crc kubenswrapper[4740]: E0130 16:32:15.002292 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf84dba-a4e6-413f-a6d5-81779c179d30" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.002317 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf84dba-a4e6-413f-a6d5-81779c179d30" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.002582 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf84dba-a4e6-413f-a6d5-81779c179d30" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.005010 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.010342 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.010395 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.010603 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.010795 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.025617 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b"] Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.161686 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.161852 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.163573 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsd7k\" (UniqueName: \"kubernetes.io/projected/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-kube-api-access-gsd7k\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.266230 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.266323 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.266444 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsd7k\" (UniqueName: 
\"kubernetes.io/projected/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-kube-api-access-gsd7k\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.272833 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.272896 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.289193 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsd7k\" (UniqueName: \"kubernetes.io/projected/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-kube-api-access-gsd7k\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.326048 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:15 crc kubenswrapper[4740]: I0130 16:32:15.974382 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b"] Jan 30 16:32:16 crc kubenswrapper[4740]: I0130 16:32:16.916210 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" event={"ID":"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3","Type":"ContainerStarted","Data":"26136cd16d26e30bc74d38f36d0c98faa760ca495617a5e74d9bc4b42ac7d5ea"} Jan 30 16:32:16 crc kubenswrapper[4740]: I0130 16:32:16.916775 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" event={"ID":"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3","Type":"ContainerStarted","Data":"519324affd689a4bf4f32e1bcfa4c57a39d3a1f2fc30a546448e8ff06d05222d"} Jan 30 16:32:16 crc kubenswrapper[4740]: I0130 16:32:16.945967 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" podStartSLOduration=2.368511842 podStartE2EDuration="2.945940613s" podCreationTimestamp="2026-01-30 16:32:14 +0000 UTC" firstStartedPulling="2026-01-30 16:32:15.977247824 +0000 UTC m=+2184.614310423" lastFinishedPulling="2026-01-30 16:32:16.554676595 +0000 UTC m=+2185.191739194" observedRunningTime="2026-01-30 16:32:16.938652401 +0000 UTC m=+2185.575715000" watchObservedRunningTime="2026-01-30 16:32:16.945940613 +0000 UTC m=+2185.583003222" Jan 30 16:32:20 crc kubenswrapper[4740]: I0130 16:32:20.918640 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:32:21 crc kubenswrapper[4740]: I0130 16:32:21.003090 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:32:21 crc kubenswrapper[4740]: I0130 16:32:21.741843 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pfl2s"] Jan 30 16:32:22 crc kubenswrapper[4740]: I0130 16:32:22.294250 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pfl2s" podUID="add8ae61-51c0-42b2-9b75-26107df23416" containerName="registry-server" containerID="cri-o://004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667" gracePeriod=2 Jan 30 16:32:22 crc kubenswrapper[4740]: E0130 16:32:22.671739 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadd8ae61_51c0_42b2_9b75_26107df23416.slice/crio-conmon-004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:32:22 crc kubenswrapper[4740]: I0130 16:32:22.965094 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.010868 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrf2n\" (UniqueName: \"kubernetes.io/projected/add8ae61-51c0-42b2-9b75-26107df23416-kube-api-access-rrf2n\") pod \"add8ae61-51c0-42b2-9b75-26107df23416\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.011456 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-catalog-content\") pod \"add8ae61-51c0-42b2-9b75-26107df23416\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.011785 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-utilities\") pod \"add8ae61-51c0-42b2-9b75-26107df23416\" (UID: \"add8ae61-51c0-42b2-9b75-26107df23416\") " Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.013870 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-utilities" (OuterVolumeSpecName: "utilities") pod "add8ae61-51c0-42b2-9b75-26107df23416" (UID: "add8ae61-51c0-42b2-9b75-26107df23416"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.032642 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add8ae61-51c0-42b2-9b75-26107df23416-kube-api-access-rrf2n" (OuterVolumeSpecName: "kube-api-access-rrf2n") pod "add8ae61-51c0-42b2-9b75-26107df23416" (UID: "add8ae61-51c0-42b2-9b75-26107df23416"). InnerVolumeSpecName "kube-api-access-rrf2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.114904 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrf2n\" (UniqueName: \"kubernetes.io/projected/add8ae61-51c0-42b2-9b75-26107df23416-kube-api-access-rrf2n\") on node \"crc\" DevicePath \"\"" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.115246 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.164965 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "add8ae61-51c0-42b2-9b75-26107df23416" (UID: "add8ae61-51c0-42b2-9b75-26107df23416"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.217966 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add8ae61-51c0-42b2-9b75-26107df23416-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.306727 4740 generic.go:334] "Generic (PLEG): container finished" podID="add8ae61-51c0-42b2-9b75-26107df23416" containerID="004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667" exitCode=0 Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.306829 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfl2s" event={"ID":"add8ae61-51c0-42b2-9b75-26107df23416","Type":"ContainerDied","Data":"004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667"} Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.306854 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pfl2s" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.306895 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfl2s" event={"ID":"add8ae61-51c0-42b2-9b75-26107df23416","Type":"ContainerDied","Data":"514e37c7786031f639030d5face3336e41e22a9537d3d764fa0055818f298df8"} Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.306928 4740 scope.go:117] "RemoveContainer" containerID="004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.310191 4740 generic.go:334] "Generic (PLEG): container finished" podID="a5fa2ffd-a5ba-47a0-a095-bd8219667aa3" containerID="26136cd16d26e30bc74d38f36d0c98faa760ca495617a5e74d9bc4b42ac7d5ea" exitCode=0 Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.310252 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" event={"ID":"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3","Type":"ContainerDied","Data":"26136cd16d26e30bc74d38f36d0c98faa760ca495617a5e74d9bc4b42ac7d5ea"} Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.343819 4740 scope.go:117] "RemoveContainer" containerID="7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.390366 4740 scope.go:117] "RemoveContainer" containerID="1bdb72b044758bc9015c0b8ab128b2bd5ad6d3a10abd6ec56e7081871985ba7f" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.396022 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pfl2s"] Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.406461 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pfl2s"] Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.436758 4740 scope.go:117] "RemoveContainer" containerID="004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667" Jan 30 16:32:23 crc kubenswrapper[4740]: E0130 16:32:23.437435 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667\": container with ID starting with 004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667 not found: ID does not exist" containerID="004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.437554 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667"} err="failed to get container status \"004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667\": rpc error: code = NotFound desc = could not find container \"004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667\": container with ID starting with 004510494a66dc7b48499637e50b325a5da6e37ce5c971d4722b6f8aebd53667 not found: ID does not exist" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.437644 4740 scope.go:117] "RemoveContainer" containerID="7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2" Jan 30 16:32:23 crc kubenswrapper[4740]: E0130 16:32:23.438535 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2\": container 
with ID starting with 7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2 not found: ID does not exist" containerID="7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.439186 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2"} err="failed to get container status \"7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2\": rpc error: code = NotFound desc = could not find container \"7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2\": container with ID starting with 7a25510b213f6d9e6a10c87eff85d842a6b50f2d5f71977f1830370d22fbc2c2 not found: ID does not exist" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.439233 4740 scope.go:117] "RemoveContainer" containerID="1bdb72b044758bc9015c0b8ab128b2bd5ad6d3a10abd6ec56e7081871985ba7f" Jan 30 16:32:23 crc kubenswrapper[4740]: E0130 16:32:23.441764 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bdb72b044758bc9015c0b8ab128b2bd5ad6d3a10abd6ec56e7081871985ba7f\": container with ID starting with 1bdb72b044758bc9015c0b8ab128b2bd5ad6d3a10abd6ec56e7081871985ba7f not found: ID does not exist" containerID="1bdb72b044758bc9015c0b8ab128b2bd5ad6d3a10abd6ec56e7081871985ba7f" Jan 30 16:32:23 crc kubenswrapper[4740]: I0130 16:32:23.441856 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bdb72b044758bc9015c0b8ab128b2bd5ad6d3a10abd6ec56e7081871985ba7f"} err="failed to get container status \"1bdb72b044758bc9015c0b8ab128b2bd5ad6d3a10abd6ec56e7081871985ba7f\": rpc error: code = NotFound desc = could not find container \"1bdb72b044758bc9015c0b8ab128b2bd5ad6d3a10abd6ec56e7081871985ba7f\": container with ID starting with 1bdb72b044758bc9015c0b8ab128b2bd5ad6d3a10abd6ec56e7081871985ba7f not found: ID does not exist" Jan 30 16:32:24 crc kubenswrapper[4740]: I0130 16:32:24.928171 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.075155 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-ssh-key-openstack-edpm-ipam\") pod \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.076694 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-inventory\") pod \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.076882 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsd7k\" (UniqueName: \"kubernetes.io/projected/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-kube-api-access-gsd7k\") pod \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\" (UID: \"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3\") " Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.085402 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-kube-api-access-gsd7k" (OuterVolumeSpecName: "kube-api-access-gsd7k") pod "a5fa2ffd-a5ba-47a0-a095-bd8219667aa3" (UID: "a5fa2ffd-a5ba-47a0-a095-bd8219667aa3"). InnerVolumeSpecName "kube-api-access-gsd7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.121942 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a5fa2ffd-a5ba-47a0-a095-bd8219667aa3" (UID: "a5fa2ffd-a5ba-47a0-a095-bd8219667aa3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.128064 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-inventory" (OuterVolumeSpecName: "inventory") pod "a5fa2ffd-a5ba-47a0-a095-bd8219667aa3" (UID: "a5fa2ffd-a5ba-47a0-a095-bd8219667aa3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.182710 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsd7k\" (UniqueName: \"kubernetes.io/projected/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-kube-api-access-gsd7k\") on node \"crc\" DevicePath \"\"" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.182976 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.182990 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a5fa2ffd-a5ba-47a0-a095-bd8219667aa3-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.333862 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" event={"ID":"a5fa2ffd-a5ba-47a0-a095-bd8219667aa3","Type":"ContainerDied","Data":"519324affd689a4bf4f32e1bcfa4c57a39d3a1f2fc30a546448e8ff06d05222d"} Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.333912 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.333916 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="519324affd689a4bf4f32e1bcfa4c57a39d3a1f2fc30a546448e8ff06d05222d" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.351095 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="add8ae61-51c0-42b2-9b75-26107df23416" path="/var/lib/kubelet/pods/add8ae61-51c0-42b2-9b75-26107df23416/volumes" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.451311 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm"] Jan 30 16:32:25 crc kubenswrapper[4740]: E0130 16:32:25.452064 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add8ae61-51c0-42b2-9b75-26107df23416" containerName="extract-content" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.452086 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="add8ae61-51c0-42b2-9b75-26107df23416" containerName="extract-content" Jan 30 16:32:25 crc kubenswrapper[4740]: E0130 16:32:25.452103 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add8ae61-51c0-42b2-9b75-26107df23416" containerName="extract-utilities" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.452113 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="add8ae61-51c0-42b2-9b75-26107df23416" containerName="extract-utilities" Jan 30 16:32:25 crc kubenswrapper[4740]: E0130 16:32:25.452139 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add8ae61-51c0-42b2-9b75-26107df23416" containerName="registry-server" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.452145 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="add8ae61-51c0-42b2-9b75-26107df23416" containerName="registry-server" Jan 30 16:32:25 crc kubenswrapper[4740]: E0130 16:32:25.452165 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fa2ffd-a5ba-47a0-a095-bd8219667aa3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 16:32:25 crc 
kubenswrapper[4740]: I0130 16:32:25.452176 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fa2ffd-a5ba-47a0-a095-bd8219667aa3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.452466 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="add8ae61-51c0-42b2-9b75-26107df23416" containerName="registry-server" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.452496 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5fa2ffd-a5ba-47a0-a095-bd8219667aa3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.453741 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.457860 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.458671 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.459091 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.465052 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.468643 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm"] Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.595005 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbtgr\" (UniqueName: \"kubernetes.io/projected/98c07536-da6e-495d-8148-949896f2b4e3-kube-api-access-dbtgr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4bsfm\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.595838 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4bsfm\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.596132 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4bsfm\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.698621 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4bsfm\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.698735 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4bsfm\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.698911 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbtgr\" (UniqueName: \"kubernetes.io/projected/98c07536-da6e-495d-8148-949896f2b4e3-kube-api-access-dbtgr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4bsfm\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.703082 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4bsfm\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.703852 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4bsfm\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.719155 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbtgr\" (UniqueName: \"kubernetes.io/projected/98c07536-da6e-495d-8148-949896f2b4e3-kube-api-access-dbtgr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4bsfm\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:32:25 crc kubenswrapper[4740]: I0130 16:32:25.783267 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:32:26 crc kubenswrapper[4740]: I0130 16:32:26.401615 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm"] Jan 30 16:32:27 crc kubenswrapper[4740]: I0130 16:32:27.366491 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" event={"ID":"98c07536-da6e-495d-8148-949896f2b4e3","Type":"ContainerStarted","Data":"672b5f096b8721f0351ef906b77eea6ea1a1648455bc52b3004002af95bc8b2e"} Jan 30 16:32:27 crc kubenswrapper[4740]: I0130 16:32:27.367104 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" event={"ID":"98c07536-da6e-495d-8148-949896f2b4e3","Type":"ContainerStarted","Data":"e77f486d6e8a047d6c495a2467011cb3e74b29f204c37bde967b8070bb4bd46b"} Jan 30 16:32:27 crc kubenswrapper[4740]: I0130 16:32:27.415004 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" podStartSLOduration=1.76697562 podStartE2EDuration="2.414981557s" podCreationTimestamp="2026-01-30 16:32:25 +0000 UTC" firstStartedPulling="2026-01-30 16:32:26.412467817 +0000 UTC m=+2195.049530426" lastFinishedPulling="2026-01-30 16:32:27.060473734 +0000 UTC m=+2195.697536363" observedRunningTime="2026-01-30 16:32:27.407746767 +0000 UTC m=+2196.044809366" watchObservedRunningTime="2026-01-30 16:32:27.414981557 +0000 UTC m=+2196.052044156" Jan 30 16:32:36 crc kubenswrapper[4740]: I0130 16:32:36.104828 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kktg4"] Jan 30 16:32:36 crc kubenswrapper[4740]: I0130 16:32:36.130731 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kktg4"] Jan 30 16:32:37 crc kubenswrapper[4740]: I0130 16:32:37.347806 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b5a28f7-a3dd-4812-af52-97f58641116a" path="/var/lib/kubelet/pods/7b5a28f7-a3dd-4812-af52-97f58641116a/volumes" Jan 30 16:33:02 crc kubenswrapper[4740]: I0130 16:33:02.811200 4740 generic.go:334] "Generic (PLEG): container finished" podID="98c07536-da6e-495d-8148-949896f2b4e3" containerID="672b5f096b8721f0351ef906b77eea6ea1a1648455bc52b3004002af95bc8b2e" exitCode=0 Jan 30 16:33:02 crc kubenswrapper[4740]: I0130 16:33:02.811302 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" event={"ID":"98c07536-da6e-495d-8148-949896f2b4e3","Type":"ContainerDied","Data":"672b5f096b8721f0351ef906b77eea6ea1a1648455bc52b3004002af95bc8b2e"} Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.474589 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.641646 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbtgr\" (UniqueName: \"kubernetes.io/projected/98c07536-da6e-495d-8148-949896f2b4e3-kube-api-access-dbtgr\") pod \"98c07536-da6e-495d-8148-949896f2b4e3\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.641770 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-inventory\") pod \"98c07536-da6e-495d-8148-949896f2b4e3\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.641956 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-ssh-key-openstack-edpm-ipam\") pod \"98c07536-da6e-495d-8148-949896f2b4e3\" (UID: \"98c07536-da6e-495d-8148-949896f2b4e3\") " Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.650623 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c07536-da6e-495d-8148-949896f2b4e3-kube-api-access-dbtgr" (OuterVolumeSpecName: "kube-api-access-dbtgr") pod "98c07536-da6e-495d-8148-949896f2b4e3" (UID: "98c07536-da6e-495d-8148-949896f2b4e3"). InnerVolumeSpecName "kube-api-access-dbtgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.680825 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-inventory" (OuterVolumeSpecName: "inventory") pod "98c07536-da6e-495d-8148-949896f2b4e3" (UID: "98c07536-da6e-495d-8148-949896f2b4e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.689940 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "98c07536-da6e-495d-8148-949896f2b4e3" (UID: "98c07536-da6e-495d-8148-949896f2b4e3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.746271 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.746315 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbtgr\" (UniqueName: \"kubernetes.io/projected/98c07536-da6e-495d-8148-949896f2b4e3-kube-api-access-dbtgr\") on node \"crc\" DevicePath \"\"" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.746334 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98c07536-da6e-495d-8148-949896f2b4e3-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.838321 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" event={"ID":"98c07536-da6e-495d-8148-949896f2b4e3","Type":"ContainerDied","Data":"e77f486d6e8a047d6c495a2467011cb3e74b29f204c37bde967b8070bb4bd46b"} Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.838415 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e77f486d6e8a047d6c495a2467011cb3e74b29f204c37bde967b8070bb4bd46b" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.838685 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4bsfm" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.968858 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw"] Jan 30 16:33:04 crc kubenswrapper[4740]: E0130 16:33:04.969502 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c07536-da6e-495d-8148-949896f2b4e3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.969523 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c07536-da6e-495d-8148-949896f2b4e3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.969755 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c07536-da6e-495d-8148-949896f2b4e3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.970688 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.989045 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.989791 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.990045 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:33:04 crc kubenswrapper[4740]: I0130 16:33:04.990286 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.007621 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw"] Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.064269 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.064358 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.064519 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sj68\" (UniqueName: \"kubernetes.io/projected/b913a0e7-afaa-4afb-9520-7930587f3b2f-kube-api-access-8sj68\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.170586 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sj68\" (UniqueName: \"kubernetes.io/projected/b913a0e7-afaa-4afb-9520-7930587f3b2f-kube-api-access-8sj68\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.204320 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.204471 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.211825 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-98dhw"] Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.223264 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.234965 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.249680 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sj68\" (UniqueName: \"kubernetes.io/projected/b913a0e7-afaa-4afb-9520-7930587f3b2f-kube-api-access-8sj68\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.273489 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-98dhw"] Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.303747 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.354635 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c239553d-1e26-46d3-9487-17a11ad18ad9" path="/var/lib/kubelet/pods/c239553d-1e26-46d3-9487-17a11ad18ad9/volumes" Jan 30 16:33:05 crc kubenswrapper[4740]: I0130 16:33:05.916411 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw"] Jan 30 16:33:05 crc kubenswrapper[4740]: W0130 16:33:05.927869 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb913a0e7_afaa_4afb_9520_7930587f3b2f.slice/crio-69a55f5f2a40e2bf8169586537831a02e29c8a32aa3be48467cee6048c487147 WatchSource:0}: Error finding container 69a55f5f2a40e2bf8169586537831a02e29c8a32aa3be48467cee6048c487147: Status 404 returned error can't find the container with id 69a55f5f2a40e2bf8169586537831a02e29c8a32aa3be48467cee6048c487147 Jan 30 16:33:06 crc kubenswrapper[4740]: I0130 16:33:06.865939 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" event={"ID":"b913a0e7-afaa-4afb-9520-7930587f3b2f","Type":"ContainerStarted","Data":"62cc72c9ad26d09acc1a92eaa02fcafe04cc398ff5e39bdea78dbfcf8520ad93"} Jan 30 16:33:06 crc kubenswrapper[4740]: I0130 16:33:06.866467 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" event={"ID":"b913a0e7-afaa-4afb-9520-7930587f3b2f","Type":"ContainerStarted","Data":"69a55f5f2a40e2bf8169586537831a02e29c8a32aa3be48467cee6048c487147"} Jan 30 16:33:06 crc kubenswrapper[4740]: I0130 16:33:06.900237 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" podStartSLOduration=2.446297142 podStartE2EDuration="2.899159732s" podCreationTimestamp="2026-01-30 16:33:04 +0000 UTC" firstStartedPulling="2026-01-30 16:33:05.931639383 +0000 UTC m=+2234.568701982" lastFinishedPulling="2026-01-30 16:33:06.384501973 +0000 UTC m=+2235.021564572" observedRunningTime="2026-01-30 16:33:06.888100457 +0000 UTC m=+2235.525163056" watchObservedRunningTime="2026-01-30 16:33:06.899159732 +0000 UTC m=+2235.536222331" Jan 30 16:33:09 crc kubenswrapper[4740]: I0130 16:33:09.047025 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-88k28"] Jan 30 16:33:09 crc kubenswrapper[4740]: I0130 16:33:09.064838 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-88k28"] Jan 30 16:33:09 crc kubenswrapper[4740]: I0130 16:33:09.354263 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df10ecb6-43ad-404c-b51c-b64913bab019" path="/var/lib/kubelet/pods/df10ecb6-43ad-404c-b51c-b64913bab019/volumes" Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.324479 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9tpgj"] Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.330636 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.385139 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9tpgj"] Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.457380 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-catalog-content\") pod \"certified-operators-9tpgj\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.457460 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv44s\" (UniqueName: \"kubernetes.io/projected/461385c2-08ed-4e31-87dc-3ce651e496ef-kube-api-access-dv44s\") pod \"certified-operators-9tpgj\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.457798 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-utilities\") pod \"certified-operators-9tpgj\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.560272 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-utilities\") pod \"certified-operators-9tpgj\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.560557 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-catalog-content\") pod \"certified-operators-9tpgj\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.560589 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv44s\" (UniqueName: \"kubernetes.io/projected/461385c2-08ed-4e31-87dc-3ce651e496ef-kube-api-access-dv44s\") pod \"certified-operators-9tpgj\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.561020 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-utilities\") pod \"certified-operators-9tpgj\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.561151 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-catalog-content\") pod \"certified-operators-9tpgj\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.582431 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dv44s\" (UniqueName: \"kubernetes.io/projected/461385c2-08ed-4e31-87dc-3ce651e496ef-kube-api-access-dv44s\") pod \"certified-operators-9tpgj\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:27 crc kubenswrapper[4740]: I0130 16:33:27.670209 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:28 crc kubenswrapper[4740]: I0130 16:33:28.239395 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9tpgj"] Jan 30 16:33:29 crc kubenswrapper[4740]: I0130 16:33:29.126486 4740 generic.go:334] "Generic (PLEG): container finished" podID="461385c2-08ed-4e31-87dc-3ce651e496ef" containerID="a0ae0f051f75549a7a1e85995ba8c02418b7d90328e1ec02cf23c08c1d0642b4" exitCode=0 Jan 30 16:33:29 crc kubenswrapper[4740]: I0130 16:33:29.126555 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tpgj" event={"ID":"461385c2-08ed-4e31-87dc-3ce651e496ef","Type":"ContainerDied","Data":"a0ae0f051f75549a7a1e85995ba8c02418b7d90328e1ec02cf23c08c1d0642b4"} Jan 30 16:33:29 crc kubenswrapper[4740]: I0130 16:33:29.126870 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tpgj" event={"ID":"461385c2-08ed-4e31-87dc-3ce651e496ef","Type":"ContainerStarted","Data":"1eb2017f9c8ec8b7558bf9de9fe2adbecdf7a43d86dea00327c66705e217c2d8"} Jan 30 16:33:30 crc kubenswrapper[4740]: I0130 16:33:30.142077 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tpgj" event={"ID":"461385c2-08ed-4e31-87dc-3ce651e496ef","Type":"ContainerStarted","Data":"be26fa97302bc4f36360d250033a95d41a43fc3a386c76ab910b945f1b4e7468"} Jan 30 16:33:32 crc kubenswrapper[4740]: I0130 16:33:32.169014 4740 generic.go:334] "Generic (PLEG): container finished" podID="461385c2-08ed-4e31-87dc-3ce651e496ef" containerID="be26fa97302bc4f36360d250033a95d41a43fc3a386c76ab910b945f1b4e7468" exitCode=0 Jan 30 16:33:32 crc kubenswrapper[4740]: I0130 16:33:32.169149 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tpgj" event={"ID":"461385c2-08ed-4e31-87dc-3ce651e496ef","Type":"ContainerDied","Data":"be26fa97302bc4f36360d250033a95d41a43fc3a386c76ab910b945f1b4e7468"} Jan 30 16:33:33 crc kubenswrapper[4740]: I0130 16:33:33.182816 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tpgj" event={"ID":"461385c2-08ed-4e31-87dc-3ce651e496ef","Type":"ContainerStarted","Data":"3650a492400229ece42cb7593f217767b3f7ce6be419d65abc80e79457dfc068"} Jan 30 16:33:33 crc kubenswrapper[4740]: I0130 16:33:33.205325 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9tpgj" podStartSLOduration=2.779116788 podStartE2EDuration="6.205300005s" podCreationTimestamp="2026-01-30 16:33:27 +0000 UTC" firstStartedPulling="2026-01-30 16:33:29.131707495 +0000 UTC m=+2257.768770094" lastFinishedPulling="2026-01-30 16:33:32.557890712 +0000 UTC m=+2261.194953311" observedRunningTime="2026-01-30 16:33:33.201231664 +0000 UTC m=+2261.838294263" watchObservedRunningTime="2026-01-30 16:33:33.205300005 +0000 UTC m=+2261.842362604" Jan 30 16:33:36 crc kubenswrapper[4740]: I0130 16:33:36.686562 4740 scope.go:117] "RemoveContainer" 
containerID="79c79f0e0908c57dfcdcea35437ea56783f95ff00ddd358c149b22ded4d10267" Jan 30 16:33:36 crc kubenswrapper[4740]: I0130 16:33:36.740021 4740 scope.go:117] "RemoveContainer" containerID="e5569e41650d8cc725597ad721a090481c7b7f95dafbc9916f0d23819cb3cabc" Jan 30 16:33:36 crc kubenswrapper[4740]: I0130 16:33:36.782908 4740 scope.go:117] "RemoveContainer" containerID="2f5ce6e264a40d5796fae903ead052c7cdfa8bde92eeeff8fabd0dcc83223b14" Jan 30 16:33:37 crc kubenswrapper[4740]: I0130 16:33:37.671177 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:37 crc kubenswrapper[4740]: I0130 16:33:37.671247 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:37 crc kubenswrapper[4740]: I0130 16:33:37.730128 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:38 crc kubenswrapper[4740]: I0130 16:33:38.299866 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:38 crc kubenswrapper[4740]: I0130 16:33:38.358863 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9tpgj"] Jan 30 16:33:40 crc kubenswrapper[4740]: I0130 16:33:40.268029 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9tpgj" podUID="461385c2-08ed-4e31-87dc-3ce651e496ef" containerName="registry-server" containerID="cri-o://3650a492400229ece42cb7593f217767b3f7ce6be419d65abc80e79457dfc068" gracePeriod=2 Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.285369 4740 generic.go:334] "Generic (PLEG): container finished" podID="461385c2-08ed-4e31-87dc-3ce651e496ef" containerID="3650a492400229ece42cb7593f217767b3f7ce6be419d65abc80e79457dfc068" exitCode=0 Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.285412 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tpgj" event={"ID":"461385c2-08ed-4e31-87dc-3ce651e496ef","Type":"ContainerDied","Data":"3650a492400229ece42cb7593f217767b3f7ce6be419d65abc80e79457dfc068"} Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.285499 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tpgj" event={"ID":"461385c2-08ed-4e31-87dc-3ce651e496ef","Type":"ContainerDied","Data":"1eb2017f9c8ec8b7558bf9de9fe2adbecdf7a43d86dea00327c66705e217c2d8"} Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.285522 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb2017f9c8ec8b7558bf9de9fe2adbecdf7a43d86dea00327c66705e217c2d8" Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.373964 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.449880 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-catalog-content\") pod \"461385c2-08ed-4e31-87dc-3ce651e496ef\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.450196 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv44s\" (UniqueName: \"kubernetes.io/projected/461385c2-08ed-4e31-87dc-3ce651e496ef-kube-api-access-dv44s\") pod \"461385c2-08ed-4e31-87dc-3ce651e496ef\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.450509 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-utilities\") pod \"461385c2-08ed-4e31-87dc-3ce651e496ef\" (UID: \"461385c2-08ed-4e31-87dc-3ce651e496ef\") " Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.454024 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-utilities" (OuterVolumeSpecName: "utilities") pod "461385c2-08ed-4e31-87dc-3ce651e496ef" (UID: "461385c2-08ed-4e31-87dc-3ce651e496ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.462862 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/461385c2-08ed-4e31-87dc-3ce651e496ef-kube-api-access-dv44s" (OuterVolumeSpecName: "kube-api-access-dv44s") pod "461385c2-08ed-4e31-87dc-3ce651e496ef" (UID: "461385c2-08ed-4e31-87dc-3ce651e496ef"). InnerVolumeSpecName "kube-api-access-dv44s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.513231 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "461385c2-08ed-4e31-87dc-3ce651e496ef" (UID: "461385c2-08ed-4e31-87dc-3ce651e496ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.555198 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv44s\" (UniqueName: \"kubernetes.io/projected/461385c2-08ed-4e31-87dc-3ce651e496ef-kube-api-access-dv44s\") on node \"crc\" DevicePath \"\"" Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.555265 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:33:41 crc kubenswrapper[4740]: I0130 16:33:41.555282 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461385c2-08ed-4e31-87dc-3ce651e496ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:33:42 crc kubenswrapper[4740]: I0130 16:33:42.302812 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9tpgj" Jan 30 16:33:42 crc kubenswrapper[4740]: I0130 16:33:42.352697 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9tpgj"] Jan 30 16:33:42 crc kubenswrapper[4740]: I0130 16:33:42.367302 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9tpgj"] Jan 30 16:33:43 crc kubenswrapper[4740]: I0130 16:33:43.358871 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="461385c2-08ed-4e31-87dc-3ce651e496ef" path="/var/lib/kubelet/pods/461385c2-08ed-4e31-87dc-3ce651e496ef/volumes" Jan 30 16:33:49 crc kubenswrapper[4740]: I0130 16:33:49.046662 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-9g9ww"] Jan 30 16:33:49 crc kubenswrapper[4740]: I0130 16:33:49.063616 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-9g9ww"] Jan 30 16:33:49 crc kubenswrapper[4740]: I0130 16:33:49.366426 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0da4cd8-4245-4e9d-bafb-5054e6ed647c" path="/var/lib/kubelet/pods/f0da4cd8-4245-4e9d-bafb-5054e6ed647c/volumes" Jan 30 16:33:51 crc kubenswrapper[4740]: I0130 16:33:51.420016 4740 generic.go:334] "Generic (PLEG): container finished" podID="b913a0e7-afaa-4afb-9520-7930587f3b2f" containerID="62cc72c9ad26d09acc1a92eaa02fcafe04cc398ff5e39bdea78dbfcf8520ad93" exitCode=0 Jan 30 16:33:51 crc kubenswrapper[4740]: I0130 16:33:51.420232 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" event={"ID":"b913a0e7-afaa-4afb-9520-7930587f3b2f","Type":"ContainerDied","Data":"62cc72c9ad26d09acc1a92eaa02fcafe04cc398ff5e39bdea78dbfcf8520ad93"} Jan 30 16:33:52 crc kubenswrapper[4740]: I0130 16:33:52.942824 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.069318 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sj68\" (UniqueName: \"kubernetes.io/projected/b913a0e7-afaa-4afb-9520-7930587f3b2f-kube-api-access-8sj68\") pod \"b913a0e7-afaa-4afb-9520-7930587f3b2f\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.070040 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-ssh-key-openstack-edpm-ipam\") pod \"b913a0e7-afaa-4afb-9520-7930587f3b2f\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.070396 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-inventory\") pod \"b913a0e7-afaa-4afb-9520-7930587f3b2f\" (UID: \"b913a0e7-afaa-4afb-9520-7930587f3b2f\") " Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.077815 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b913a0e7-afaa-4afb-9520-7930587f3b2f-kube-api-access-8sj68" (OuterVolumeSpecName: "kube-api-access-8sj68") pod "b913a0e7-afaa-4afb-9520-7930587f3b2f" (UID: "b913a0e7-afaa-4afb-9520-7930587f3b2f"). 
InnerVolumeSpecName "kube-api-access-8sj68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.104483 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-inventory" (OuterVolumeSpecName: "inventory") pod "b913a0e7-afaa-4afb-9520-7930587f3b2f" (UID: "b913a0e7-afaa-4afb-9520-7930587f3b2f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.108974 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b913a0e7-afaa-4afb-9520-7930587f3b2f" (UID: "b913a0e7-afaa-4afb-9520-7930587f3b2f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.173921 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.173981 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sj68\" (UniqueName: \"kubernetes.io/projected/b913a0e7-afaa-4afb-9520-7930587f3b2f-kube-api-access-8sj68\") on node \"crc\" DevicePath \"\"" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.173993 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b913a0e7-afaa-4afb-9520-7930587f3b2f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.450977 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" event={"ID":"b913a0e7-afaa-4afb-9520-7930587f3b2f","Type":"ContainerDied","Data":"69a55f5f2a40e2bf8169586537831a02e29c8a32aa3be48467cee6048c487147"} Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.451042 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a55f5f2a40e2bf8169586537831a02e29c8a32aa3be48467cee6048c487147" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.451103 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.650639 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sklln"] Jan 30 16:33:53 crc kubenswrapper[4740]: E0130 16:33:53.651261 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461385c2-08ed-4e31-87dc-3ce651e496ef" containerName="extract-utilities" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.651285 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="461385c2-08ed-4e31-87dc-3ce651e496ef" containerName="extract-utilities" Jan 30 16:33:53 crc kubenswrapper[4740]: E0130 16:33:53.651301 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461385c2-08ed-4e31-87dc-3ce651e496ef" containerName="registry-server" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.651313 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="461385c2-08ed-4e31-87dc-3ce651e496ef" containerName="registry-server" Jan 30 16:33:53 crc kubenswrapper[4740]: E0130 16:33:53.651328 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b913a0e7-afaa-4afb-9520-7930587f3b2f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.651340 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b913a0e7-afaa-4afb-9520-7930587f3b2f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 16:33:53 crc kubenswrapper[4740]: E0130 16:33:53.651379 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461385c2-08ed-4e31-87dc-3ce651e496ef" containerName="extract-content" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.651386 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="461385c2-08ed-4e31-87dc-3ce651e496ef" containerName="extract-content" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.651626 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="461385c2-08ed-4e31-87dc-3ce651e496ef" containerName="registry-server" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.651653 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b913a0e7-afaa-4afb-9520-7930587f3b2f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.652737 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.656257 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.656421 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.656559 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.656557 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.666343 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sklln"] Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.792136 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpq6b\" (UniqueName: \"kubernetes.io/projected/f616041d-231d-409f-b1eb-bb0939ada6d6-kube-api-access-gpq6b\") pod \"ssh-known-hosts-edpm-deployment-sklln\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.792487 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sklln\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.792842 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sklln\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.896112 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpq6b\" (UniqueName: \"kubernetes.io/projected/f616041d-231d-409f-b1eb-bb0939ada6d6-kube-api-access-gpq6b\") pod \"ssh-known-hosts-edpm-deployment-sklln\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.896519 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sklln\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.896690 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sklln\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:33:53 crc 
kubenswrapper[4740]: I0130 16:33:53.900977 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sklln\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.905228 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sklln\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.929524 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpq6b\" (UniqueName: \"kubernetes.io/projected/f616041d-231d-409f-b1eb-bb0939ada6d6-kube-api-access-gpq6b\") pod \"ssh-known-hosts-edpm-deployment-sklln\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:33:53 crc kubenswrapper[4740]: I0130 16:33:53.971576 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:33:54 crc kubenswrapper[4740]: I0130 16:33:54.455306 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:33:54 crc kubenswrapper[4740]: I0130 16:33:54.455886 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:33:54 crc kubenswrapper[4740]: I0130 16:33:54.603012 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sklln"] Jan 30 16:33:55 crc kubenswrapper[4740]: I0130 16:33:55.479627 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sklln" event={"ID":"f616041d-231d-409f-b1eb-bb0939ada6d6","Type":"ContainerStarted","Data":"b857f3cbc202b32620dce48af52c850f8cb13f2093c20a88f2671a64a3eb61c0"} Jan 30 16:33:55 crc kubenswrapper[4740]: I0130 16:33:55.479955 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sklln" event={"ID":"f616041d-231d-409f-b1eb-bb0939ada6d6","Type":"ContainerStarted","Data":"ad9136dd322b6ae3e76a539c375c22a967ed2620b4d4adf493a19f60f865068b"} Jan 30 16:33:55 crc kubenswrapper[4740]: I0130 16:33:55.507130 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-sklln" podStartSLOduration=2.0304875510000002 podStartE2EDuration="2.507105852s" podCreationTimestamp="2026-01-30 16:33:53 +0000 UTC" firstStartedPulling="2026-01-30 16:33:54.603080404 +0000 UTC m=+2283.240143003" lastFinishedPulling="2026-01-30 16:33:55.079698705 +0000 UTC m=+2283.716761304" observedRunningTime="2026-01-30 16:33:55.494741635 +0000 UTC m=+2284.131804234" 
watchObservedRunningTime="2026-01-30 16:33:55.507105852 +0000 UTC m=+2284.144168451" Jan 30 16:34:01 crc kubenswrapper[4740]: I0130 16:34:01.553012 4740 generic.go:334] "Generic (PLEG): container finished" podID="f616041d-231d-409f-b1eb-bb0939ada6d6" containerID="b857f3cbc202b32620dce48af52c850f8cb13f2093c20a88f2671a64a3eb61c0" exitCode=0 Jan 30 16:34:01 crc kubenswrapper[4740]: I0130 16:34:01.553215 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sklln" event={"ID":"f616041d-231d-409f-b1eb-bb0939ada6d6","Type":"ContainerDied","Data":"b857f3cbc202b32620dce48af52c850f8cb13f2093c20a88f2671a64a3eb61c0"} Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.122164 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.268823 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpq6b\" (UniqueName: \"kubernetes.io/projected/f616041d-231d-409f-b1eb-bb0939ada6d6-kube-api-access-gpq6b\") pod \"f616041d-231d-409f-b1eb-bb0939ada6d6\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.269467 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-ssh-key-openstack-edpm-ipam\") pod \"f616041d-231d-409f-b1eb-bb0939ada6d6\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.269605 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-inventory-0\") pod \"f616041d-231d-409f-b1eb-bb0939ada6d6\" (UID: \"f616041d-231d-409f-b1eb-bb0939ada6d6\") " Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.275502 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f616041d-231d-409f-b1eb-bb0939ada6d6-kube-api-access-gpq6b" (OuterVolumeSpecName: "kube-api-access-gpq6b") pod "f616041d-231d-409f-b1eb-bb0939ada6d6" (UID: "f616041d-231d-409f-b1eb-bb0939ada6d6"). InnerVolumeSpecName "kube-api-access-gpq6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.299519 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f616041d-231d-409f-b1eb-bb0939ada6d6" (UID: "f616041d-231d-409f-b1eb-bb0939ada6d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.299932 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f616041d-231d-409f-b1eb-bb0939ada6d6" (UID: "f616041d-231d-409f-b1eb-bb0939ada6d6"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.372895 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.373221 4740 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f616041d-231d-409f-b1eb-bb0939ada6d6-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.373337 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpq6b\" (UniqueName: \"kubernetes.io/projected/f616041d-231d-409f-b1eb-bb0939ada6d6-kube-api-access-gpq6b\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.623532 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sklln" event={"ID":"f616041d-231d-409f-b1eb-bb0939ada6d6","Type":"ContainerDied","Data":"ad9136dd322b6ae3e76a539c375c22a967ed2620b4d4adf493a19f60f865068b"} Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.623621 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad9136dd322b6ae3e76a539c375c22a967ed2620b4d4adf493a19f60f865068b" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.623770 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sklln" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.686195 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f"] Jan 30 16:34:03 crc kubenswrapper[4740]: E0130 16:34:03.686840 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f616041d-231d-409f-b1eb-bb0939ada6d6" containerName="ssh-known-hosts-edpm-deployment" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.686864 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f616041d-231d-409f-b1eb-bb0939ada6d6" containerName="ssh-known-hosts-edpm-deployment" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.687171 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f616041d-231d-409f-b1eb-bb0939ada6d6" containerName="ssh-known-hosts-edpm-deployment" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.689458 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.695028 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.695287 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.695451 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.695584 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.698008 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f"] Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.819963 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xqw8f\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.820111 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rwd\" (UniqueName: \"kubernetes.io/projected/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-kube-api-access-88rwd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xqw8f\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.820150 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xqw8f\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.922285 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rwd\" (UniqueName: \"kubernetes.io/projected/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-kube-api-access-88rwd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xqw8f\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.922383 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xqw8f\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.922556 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-xqw8f\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.927443 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xqw8f\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.927928 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xqw8f\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:03 crc kubenswrapper[4740]: I0130 16:34:03.942441 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rwd\" (UniqueName: \"kubernetes.io/projected/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-kube-api-access-88rwd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xqw8f\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:04 crc kubenswrapper[4740]: I0130 16:34:04.017139 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:04 crc kubenswrapper[4740]: I0130 16:34:04.605227 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f"] Jan 30 16:34:04 crc kubenswrapper[4740]: I0130 16:34:04.636699 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" event={"ID":"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c","Type":"ContainerStarted","Data":"7e6147a285cc801aeeb3ff7685822bdbcbd8bfa2cadab3e6b2c897c64307530c"} Jan 30 16:34:05 crc kubenswrapper[4740]: I0130 16:34:05.654388 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" event={"ID":"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c","Type":"ContainerStarted","Data":"3ccdd425336aeaffe435ba32b342127d5293cedd15921eb6c6935fcad4279623"} Jan 30 16:34:05 crc kubenswrapper[4740]: I0130 16:34:05.683246 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" podStartSLOduration=2.258909266 podStartE2EDuration="2.683221796s" podCreationTimestamp="2026-01-30 16:34:03 +0000 UTC" firstStartedPulling="2026-01-30 16:34:04.616803816 +0000 UTC m=+2293.253866435" lastFinishedPulling="2026-01-30 16:34:05.041116366 +0000 UTC m=+2293.678178965" observedRunningTime="2026-01-30 16:34:05.676267233 +0000 UTC m=+2294.313329842" watchObservedRunningTime="2026-01-30 16:34:05.683221796 +0000 UTC m=+2294.320284395" Jan 30 16:34:12 crc kubenswrapper[4740]: I0130 16:34:12.770135 4740 generic.go:334] "Generic (PLEG): container finished" podID="4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c" containerID="3ccdd425336aeaffe435ba32b342127d5293cedd15921eb6c6935fcad4279623" exitCode=0 Jan 30 16:34:12 crc kubenswrapper[4740]: I0130 16:34:12.770187 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" event={"ID":"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c","Type":"ContainerDied","Data":"3ccdd425336aeaffe435ba32b342127d5293cedd15921eb6c6935fcad4279623"} Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.327023 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.512882 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-ssh-key-openstack-edpm-ipam\") pod \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.513609 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-inventory\") pod \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.513645 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88rwd\" (UniqueName: \"kubernetes.io/projected/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-kube-api-access-88rwd\") pod \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\" (UID: \"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c\") " Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.521131 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-kube-api-access-88rwd" (OuterVolumeSpecName: "kube-api-access-88rwd") pod "4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c" (UID: "4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c"). InnerVolumeSpecName "kube-api-access-88rwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.588014 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-inventory" (OuterVolumeSpecName: "inventory") pod "4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c" (UID: "4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.598696 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c" (UID: "4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.616647 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.616674 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88rwd\" (UniqueName: \"kubernetes.io/projected/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-kube-api-access-88rwd\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.616686 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.792364 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" event={"ID":"4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c","Type":"ContainerDied","Data":"7e6147a285cc801aeeb3ff7685822bdbcbd8bfa2cadab3e6b2c897c64307530c"} Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.792428 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e6147a285cc801aeeb3ff7685822bdbcbd8bfa2cadab3e6b2c897c64307530c" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.792457 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xqw8f" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.837864 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vfjv9"] Jan 30 16:34:14 crc kubenswrapper[4740]: E0130 16:34:14.838563 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.838589 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.838874 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.841050 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.899587 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vfjv9"] Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.925089 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-utilities\") pod \"redhat-marketplace-vfjv9\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.925189 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcsd\" (UniqueName: \"kubernetes.io/projected/68af2bed-78de-4742-a232-926641529c1f-kube-api-access-6mcsd\") pod \"redhat-marketplace-vfjv9\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.925283 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-catalog-content\") pod \"redhat-marketplace-vfjv9\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.940170 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6"] Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.943027 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.950695 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.950936 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.951145 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.951178 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:34:14 crc kubenswrapper[4740]: I0130 16:34:14.962395 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6"] Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.030111 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-utilities\") pod \"redhat-marketplace-vfjv9\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.030187 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcsd\" (UniqueName: \"kubernetes.io/projected/68af2bed-78de-4742-a232-926641529c1f-kube-api-access-6mcsd\") pod \"redhat-marketplace-vfjv9\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.030260 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-catalog-content\") pod \"redhat-marketplace-vfjv9\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.030769 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-catalog-content\") pod \"redhat-marketplace-vfjv9\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.031506 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-utilities\") pod \"redhat-marketplace-vfjv9\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.048802 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcsd\" (UniqueName: \"kubernetes.io/projected/68af2bed-78de-4742-a232-926641529c1f-kube-api-access-6mcsd\") pod \"redhat-marketplace-vfjv9\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.133295 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8q8f\" (UniqueName: 
\"kubernetes.io/projected/9f74f942-192f-46c2-b1fd-df038a2fd9e7-kube-api-access-v8q8f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.133709 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.133902 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.162739 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.235975 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.236181 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8q8f\" (UniqueName: \"kubernetes.io/projected/9f74f942-192f-46c2-b1fd-df038a2fd9e7-kube-api-access-v8q8f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.236266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.244278 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.248779 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.289676 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8q8f\" (UniqueName: \"kubernetes.io/projected/9f74f942-192f-46c2-b1fd-df038a2fd9e7-kube-api-access-v8q8f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.586966 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.689748 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vfjv9"] Jan 30 16:34:15 crc kubenswrapper[4740]: W0130 16:34:15.715488 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68af2bed_78de_4742_a232_926641529c1f.slice/crio-5156409c0625042ae0f67a30d010fae57d3974c3246592e6c2519f6898975e92 WatchSource:0}: Error finding container 5156409c0625042ae0f67a30d010fae57d3974c3246592e6c2519f6898975e92: Status 404 returned error can't find the container with id 5156409c0625042ae0f67a30d010fae57d3974c3246592e6c2519f6898975e92 Jan 30 16:34:15 crc kubenswrapper[4740]: I0130 16:34:15.821830 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfjv9" event={"ID":"68af2bed-78de-4742-a232-926641529c1f","Type":"ContainerStarted","Data":"5156409c0625042ae0f67a30d010fae57d3974c3246592e6c2519f6898975e92"} Jan 30 16:34:16 crc kubenswrapper[4740]: I0130 16:34:16.147855 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6"] Jan 30 16:34:16 crc kubenswrapper[4740]: W0130 16:34:16.149853 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f74f942_192f_46c2_b1fd_df038a2fd9e7.slice/crio-6bdb741c4c1f82f56bf2c6cfc7df0708bd4bbb9464b1858f7a2513365e530987 WatchSource:0}: Error finding container 6bdb741c4c1f82f56bf2c6cfc7df0708bd4bbb9464b1858f7a2513365e530987: Status 404 returned error can't find the container with id 6bdb741c4c1f82f56bf2c6cfc7df0708bd4bbb9464b1858f7a2513365e530987 Jan 30 16:34:16 crc kubenswrapper[4740]: I0130 16:34:16.836836 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" event={"ID":"9f74f942-192f-46c2-b1fd-df038a2fd9e7","Type":"ContainerStarted","Data":"6bdb741c4c1f82f56bf2c6cfc7df0708bd4bbb9464b1858f7a2513365e530987"} Jan 30 16:34:16 crc kubenswrapper[4740]: I0130 16:34:16.840751 4740 generic.go:334] "Generic (PLEG): container finished" podID="68af2bed-78de-4742-a232-926641529c1f" containerID="f0aa4234cdeaa3fc838751bcd16964a4e1fb0fb8e93054bd9c77bfc3e144571c" exitCode=0 Jan 30 16:34:16 crc kubenswrapper[4740]: I0130 16:34:16.840817 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfjv9" event={"ID":"68af2bed-78de-4742-a232-926641529c1f","Type":"ContainerDied","Data":"f0aa4234cdeaa3fc838751bcd16964a4e1fb0fb8e93054bd9c77bfc3e144571c"} Jan 30 16:34:17 crc kubenswrapper[4740]: I0130 16:34:17.853487 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" event={"ID":"9f74f942-192f-46c2-b1fd-df038a2fd9e7","Type":"ContainerStarted","Data":"4de8f7c4101e57e3769dfe89646f6c516475f886ab8ddab26ce4566885bec8f7"} Jan 30 16:34:17 crc kubenswrapper[4740]: I0130 16:34:17.885120 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" podStartSLOduration=3.239516437 podStartE2EDuration="3.885095654s" podCreationTimestamp="2026-01-30 16:34:14 +0000 UTC" firstStartedPulling="2026-01-30 16:34:16.153288975 +0000 UTC m=+2304.790351574" lastFinishedPulling="2026-01-30 16:34:16.798868182 +0000 UTC m=+2305.435930791" observedRunningTime="2026-01-30 16:34:17.873413433 +0000 UTC m=+2306.510476042" watchObservedRunningTime="2026-01-30 16:34:17.885095654 +0000 UTC m=+2306.522158273" Jan 30 16:34:18 crc kubenswrapper[4740]: I0130 16:34:18.880650 4740 generic.go:334] "Generic (PLEG): container finished" podID="68af2bed-78de-4742-a232-926641529c1f" containerID="fda7a8055e72f3c905275637758b75730dea677df884371325061824cb3cbb17" exitCode=0 Jan 30 16:34:18 crc kubenswrapper[4740]: I0130 16:34:18.882250 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfjv9" event={"ID":"68af2bed-78de-4742-a232-926641529c1f","Type":"ContainerDied","Data":"fda7a8055e72f3c905275637758b75730dea677df884371325061824cb3cbb17"} Jan 30 16:34:19 crc kubenswrapper[4740]: I0130 16:34:19.894748 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfjv9" event={"ID":"68af2bed-78de-4742-a232-926641529c1f","Type":"ContainerStarted","Data":"d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40"} Jan 30 16:34:19 crc kubenswrapper[4740]: I0130 16:34:19.917666 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vfjv9" podStartSLOduration=3.451065283 podStartE2EDuration="5.917647479s" podCreationTimestamp="2026-01-30 16:34:14 +0000 UTC" firstStartedPulling="2026-01-30 16:34:16.843119793 +0000 UTC m=+2305.480182392" lastFinishedPulling="2026-01-30 16:34:19.309701989 +0000 UTC m=+2307.946764588" observedRunningTime="2026-01-30 16:34:19.912389828 +0000 UTC m=+2308.549452417" watchObservedRunningTime="2026-01-30 16:34:19.917647479 +0000 UTC m=+2308.554710078" Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.199611 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k6h6r"] Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.202964 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.216595 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k6h6r"] Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.342552 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxsg9\" (UniqueName: \"kubernetes.io/projected/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-kube-api-access-jxsg9\") pod \"community-operators-k6h6r\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.342629 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-utilities\") pod \"community-operators-k6h6r\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.343231 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-catalog-content\") pod \"community-operators-k6h6r\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.446531 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxsg9\" (UniqueName: \"kubernetes.io/projected/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-kube-api-access-jxsg9\") pod \"community-operators-k6h6r\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.446607 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-utilities\") pod \"community-operators-k6h6r\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.446775 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-catalog-content\") pod \"community-operators-k6h6r\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.447259 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-utilities\") pod \"community-operators-k6h6r\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.447298 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-catalog-content\") pod \"community-operators-k6h6r\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.468053 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jxsg9\" (UniqueName: \"kubernetes.io/projected/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-kube-api-access-jxsg9\") pod \"community-operators-k6h6r\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:21 crc kubenswrapper[4740]: I0130 16:34:21.541938 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:22 crc kubenswrapper[4740]: I0130 16:34:22.157967 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k6h6r"] Jan 30 16:34:22 crc kubenswrapper[4740]: I0130 16:34:22.940895 4740 generic.go:334] "Generic (PLEG): container finished" podID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" containerID="a4a3f5536c82218b42d03c58c2ee40cdc58e06ffb8cf10d7b8c1fd75e6079382" exitCode=0 Jan 30 16:34:22 crc kubenswrapper[4740]: I0130 16:34:22.940950 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6h6r" event={"ID":"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d","Type":"ContainerDied","Data":"a4a3f5536c82218b42d03c58c2ee40cdc58e06ffb8cf10d7b8c1fd75e6079382"} Jan 30 16:34:22 crc kubenswrapper[4740]: I0130 16:34:22.940986 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6h6r" event={"ID":"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d","Type":"ContainerStarted","Data":"ab84699d8830aed17bad86d701ba07747f362f11a8253fcd20f0c318a0ab155d"} Jan 30 16:34:24 crc kubenswrapper[4740]: I0130 16:34:24.454997 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:34:24 crc kubenswrapper[4740]: I0130 16:34:24.456323 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:34:24 crc kubenswrapper[4740]: I0130 16:34:24.964048 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6h6r" event={"ID":"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d","Type":"ContainerStarted","Data":"6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3"} Jan 30 16:34:25 crc kubenswrapper[4740]: I0130 16:34:25.163517 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:25 crc kubenswrapper[4740]: I0130 16:34:25.163575 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:25 crc kubenswrapper[4740]: I0130 16:34:25.223129 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:25 crc kubenswrapper[4740]: I0130 16:34:25.979942 4740 generic.go:334] "Generic (PLEG): container finished" podID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" containerID="6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3" exitCode=0 Jan 30 16:34:25 crc kubenswrapper[4740]: I0130 16:34:25.980024 4740 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6h6r" event={"ID":"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d","Type":"ContainerDied","Data":"6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3"} Jan 30 16:34:26 crc kubenswrapper[4740]: I0130 16:34:26.056299 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:26 crc kubenswrapper[4740]: E0130 16:34:26.180342 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f74f942_192f_46c2_b1fd_df038a2fd9e7.slice/crio-4de8f7c4101e57e3769dfe89646f6c516475f886ab8ddab26ce4566885bec8f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f74f942_192f_46c2_b1fd_df038a2fd9e7.slice/crio-conmon-4de8f7c4101e57e3769dfe89646f6c516475f886ab8ddab26ce4566885bec8f7.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:34:26 crc kubenswrapper[4740]: I0130 16:34:26.594539 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vfjv9"] Jan 30 16:34:26 crc kubenswrapper[4740]: I0130 16:34:26.995387 4740 generic.go:334] "Generic (PLEG): container finished" podID="9f74f942-192f-46c2-b1fd-df038a2fd9e7" containerID="4de8f7c4101e57e3769dfe89646f6c516475f886ab8ddab26ce4566885bec8f7" exitCode=0 Jan 30 16:34:26 crc kubenswrapper[4740]: I0130 16:34:26.995488 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" event={"ID":"9f74f942-192f-46c2-b1fd-df038a2fd9e7","Type":"ContainerDied","Data":"4de8f7c4101e57e3769dfe89646f6c516475f886ab8ddab26ce4566885bec8f7"} Jan 30 16:34:26 crc kubenswrapper[4740]: I0130 16:34:26.998021 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6h6r" event={"ID":"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d","Type":"ContainerStarted","Data":"3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17"} Jan 30 16:34:27 crc kubenswrapper[4740]: I0130 16:34:27.044558 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k6h6r" podStartSLOduration=2.495400089 podStartE2EDuration="6.044530357s" podCreationTimestamp="2026-01-30 16:34:21 +0000 UTC" firstStartedPulling="2026-01-30 16:34:22.942839787 +0000 UTC m=+2311.579902386" lastFinishedPulling="2026-01-30 16:34:26.491970045 +0000 UTC m=+2315.129032654" observedRunningTime="2026-01-30 16:34:27.030960859 +0000 UTC m=+2315.668023468" watchObservedRunningTime="2026-01-30 16:34:27.044530357 +0000 UTC m=+2315.681592976" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.009293 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vfjv9" podUID="68af2bed-78de-4742-a232-926641529c1f" containerName="registry-server" containerID="cri-o://d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40" gracePeriod=2 Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.680576 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.687615 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.738060 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8q8f\" (UniqueName: \"kubernetes.io/projected/9f74f942-192f-46c2-b1fd-df038a2fd9e7-kube-api-access-v8q8f\") pod \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.738444 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-inventory\") pod \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.738636 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mcsd\" (UniqueName: \"kubernetes.io/projected/68af2bed-78de-4742-a232-926641529c1f-kube-api-access-6mcsd\") pod \"68af2bed-78de-4742-a232-926641529c1f\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.738708 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-utilities\") pod \"68af2bed-78de-4742-a232-926641529c1f\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.738746 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-ssh-key-openstack-edpm-ipam\") pod \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\" (UID: \"9f74f942-192f-46c2-b1fd-df038a2fd9e7\") " Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.738830 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-catalog-content\") pod \"68af2bed-78de-4742-a232-926641529c1f\" (UID: \"68af2bed-78de-4742-a232-926641529c1f\") " Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.740757 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-utilities" (OuterVolumeSpecName: "utilities") pod "68af2bed-78de-4742-a232-926641529c1f" (UID: "68af2bed-78de-4742-a232-926641529c1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.744830 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f74f942-192f-46c2-b1fd-df038a2fd9e7-kube-api-access-v8q8f" (OuterVolumeSpecName: "kube-api-access-v8q8f") pod "9f74f942-192f-46c2-b1fd-df038a2fd9e7" (UID: "9f74f942-192f-46c2-b1fd-df038a2fd9e7"). InnerVolumeSpecName "kube-api-access-v8q8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.748203 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68af2bed-78de-4742-a232-926641529c1f-kube-api-access-6mcsd" (OuterVolumeSpecName: "kube-api-access-6mcsd") pod "68af2bed-78de-4742-a232-926641529c1f" (UID: "68af2bed-78de-4742-a232-926641529c1f"). InnerVolumeSpecName "kube-api-access-6mcsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.762931 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68af2bed-78de-4742-a232-926641529c1f" (UID: "68af2bed-78de-4742-a232-926641529c1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.771769 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-inventory" (OuterVolumeSpecName: "inventory") pod "9f74f942-192f-46c2-b1fd-df038a2fd9e7" (UID: "9f74f942-192f-46c2-b1fd-df038a2fd9e7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.789795 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f74f942-192f-46c2-b1fd-df038a2fd9e7" (UID: "9f74f942-192f-46c2-b1fd-df038a2fd9e7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.842038 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.842086 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8q8f\" (UniqueName: \"kubernetes.io/projected/9f74f942-192f-46c2-b1fd-df038a2fd9e7-kube-api-access-v8q8f\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.842104 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.842116 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mcsd\" (UniqueName: \"kubernetes.io/projected/68af2bed-78de-4742-a232-926641529c1f-kube-api-access-6mcsd\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.842129 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68af2bed-78de-4742-a232-926641529c1f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:28 crc kubenswrapper[4740]: I0130 16:34:28.842142 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f74f942-192f-46c2-b1fd-df038a2fd9e7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.022398 4740 generic.go:334] "Generic (PLEG): container finished" podID="68af2bed-78de-4742-a232-926641529c1f" containerID="d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40" exitCode=0 Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.022490 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfjv9" 
event={"ID":"68af2bed-78de-4742-a232-926641529c1f","Type":"ContainerDied","Data":"d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40"} Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.022538 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vfjv9" event={"ID":"68af2bed-78de-4742-a232-926641529c1f","Type":"ContainerDied","Data":"5156409c0625042ae0f67a30d010fae57d3974c3246592e6c2519f6898975e92"} Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.022562 4740 scope.go:117] "RemoveContainer" containerID="d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.022746 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vfjv9" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.028231 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" event={"ID":"9f74f942-192f-46c2-b1fd-df038a2fd9e7","Type":"ContainerDied","Data":"6bdb741c4c1f82f56bf2c6cfc7df0708bd4bbb9464b1858f7a2513365e530987"} Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.028279 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.028291 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bdb741c4c1f82f56bf2c6cfc7df0708bd4bbb9464b1858f7a2513365e530987" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.063874 4740 scope.go:117] "RemoveContainer" containerID="fda7a8055e72f3c905275637758b75730dea677df884371325061824cb3cbb17" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.084965 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vfjv9"] Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.093102 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vfjv9"] Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.101669 4740 scope.go:117] "RemoveContainer" containerID="f0aa4234cdeaa3fc838751bcd16964a4e1fb0fb8e93054bd9c77bfc3e144571c" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.135641 4740 scope.go:117] "RemoveContainer" containerID="d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40" Jan 30 16:34:29 crc kubenswrapper[4740]: E0130 16:34:29.136274 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40\": container with ID starting with d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40 not found: ID does not exist" containerID="d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.136319 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40"} err="failed to get container status \"d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40\": rpc error: code = NotFound desc = could not find container \"d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40\": container with ID starting with d69a355976d930be2e7dd7b921e266d8b6589da11dbd159a6c61ba2a9f738a40 not found: ID does not exist" Jan 
Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.136376 4740 scope.go:117] "RemoveContainer" containerID="fda7a8055e72f3c905275637758b75730dea677df884371325061824cb3cbb17"
Jan 30 16:34:29 crc kubenswrapper[4740]: E0130 16:34:29.136844 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda7a8055e72f3c905275637758b75730dea677df884371325061824cb3cbb17\": container with ID starting with fda7a8055e72f3c905275637758b75730dea677df884371325061824cb3cbb17 not found: ID does not exist" containerID="fda7a8055e72f3c905275637758b75730dea677df884371325061824cb3cbb17"
Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.136919 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda7a8055e72f3c905275637758b75730dea677df884371325061824cb3cbb17"} err="failed to get container status \"fda7a8055e72f3c905275637758b75730dea677df884371325061824cb3cbb17\": rpc error: code = NotFound desc = could not find container \"fda7a8055e72f3c905275637758b75730dea677df884371325061824cb3cbb17\": container with ID starting with fda7a8055e72f3c905275637758b75730dea677df884371325061824cb3cbb17 not found: ID does not exist"
Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.136956 4740 scope.go:117] "RemoveContainer" containerID="f0aa4234cdeaa3fc838751bcd16964a4e1fb0fb8e93054bd9c77bfc3e144571c"
Jan 30 16:34:29 crc kubenswrapper[4740]: E0130 16:34:29.137540 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0aa4234cdeaa3fc838751bcd16964a4e1fb0fb8e93054bd9c77bfc3e144571c\": container with ID starting with f0aa4234cdeaa3fc838751bcd16964a4e1fb0fb8e93054bd9c77bfc3e144571c not found: ID does not exist" containerID="f0aa4234cdeaa3fc838751bcd16964a4e1fb0fb8e93054bd9c77bfc3e144571c"
Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.137580 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0aa4234cdeaa3fc838751bcd16964a4e1fb0fb8e93054bd9c77bfc3e144571c"} err="failed to get container status \"f0aa4234cdeaa3fc838751bcd16964a4e1fb0fb8e93054bd9c77bfc3e144571c\": rpc error: code = NotFound desc = could not find container \"f0aa4234cdeaa3fc838751bcd16964a4e1fb0fb8e93054bd9c77bfc3e144571c\": container with ID starting with f0aa4234cdeaa3fc838751bcd16964a4e1fb0fb8e93054bd9c77bfc3e144571c not found: ID does not exist"
Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.146735 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f"]
Jan 30 16:34:29 crc kubenswrapper[4740]: E0130 16:34:29.147304 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68af2bed-78de-4742-a232-926641529c1f" containerName="registry-server"
Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.147324 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="68af2bed-78de-4742-a232-926641529c1f" containerName="registry-server"
Jan 30 16:34:29 crc kubenswrapper[4740]: E0130 16:34:29.147385 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f74f942-192f-46c2-b1fd-df038a2fd9e7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.147393 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f74f942-192f-46c2-b1fd-df038a2fd9e7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 30 16:34:29 crc kubenswrapper[4740]:
E0130 16:34:29.147409 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68af2bed-78de-4742-a232-926641529c1f" containerName="extract-utilities" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.147414 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="68af2bed-78de-4742-a232-926641529c1f" containerName="extract-utilities" Jan 30 16:34:29 crc kubenswrapper[4740]: E0130 16:34:29.147449 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68af2bed-78de-4742-a232-926641529c1f" containerName="extract-content" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.147455 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="68af2bed-78de-4742-a232-926641529c1f" containerName="extract-content" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.147657 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f74f942-192f-46c2-b1fd-df038a2fd9e7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.147676 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="68af2bed-78de-4742-a232-926641529c1f" containerName="registry-server" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.148533 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.151738 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.153224 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.153872 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.153972 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.153998 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.154059 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.160188 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.165684 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.186950 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f"] Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.252516 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 
30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.252607 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.252638 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.252684 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.252901 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.252993 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.253089 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc4tk\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-kube-api-access-fc4tk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.253574 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.253657 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.253919 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.253963 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.254038 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.254323 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.254506 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.348664 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68af2bed-78de-4742-a232-926641529c1f" path="/var/lib/kubelet/pods/68af2bed-78de-4742-a232-926641529c1f/volumes" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.356745 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.356821 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.356862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.356943 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.356978 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.357587 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.357635 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc4tk\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-kube-api-access-fc4tk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.357721 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.357762 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.357871 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.357911 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.357947 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.358062 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.358143 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.361478 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.362022 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.362830 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.363623 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.364598 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.365334 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.365454 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.365767 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.366379 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.367714 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.369930 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.372606 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.376397 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.376718 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc4tk\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-kube-api-access-fc4tk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:29 crc kubenswrapper[4740]: I0130 16:34:29.523158 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:34:30 crc kubenswrapper[4740]: I0130 16:34:30.108877 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f"] Jan 30 16:34:30 crc kubenswrapper[4740]: W0130 16:34:30.117754 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddafe432a_92c3_4e2a_8e5b_6f4579049269.slice/crio-85deea3fa622ea8548e95398a69112989180159f85a4a0fcde2eba72306240dc WatchSource:0}: Error finding container 85deea3fa622ea8548e95398a69112989180159f85a4a0fcde2eba72306240dc: Status 404 returned error can't find the container with id 85deea3fa622ea8548e95398a69112989180159f85a4a0fcde2eba72306240dc Jan 30 16:34:31 crc kubenswrapper[4740]: I0130 16:34:31.052681 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" event={"ID":"dafe432a-92c3-4e2a-8e5b-6f4579049269","Type":"ContainerStarted","Data":"24bb07291557ed5c68ef7c2aa9cb04f787573ea227446e61a4ca7698b06b5d5c"} Jan 30 16:34:31 crc kubenswrapper[4740]: I0130 16:34:31.052997 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" event={"ID":"dafe432a-92c3-4e2a-8e5b-6f4579049269","Type":"ContainerStarted","Data":"85deea3fa622ea8548e95398a69112989180159f85a4a0fcde2eba72306240dc"} Jan 30 16:34:31 crc kubenswrapper[4740]: I0130 16:34:31.078045 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" podStartSLOduration=1.523667383 podStartE2EDuration="2.078014319s" podCreationTimestamp="2026-01-30 16:34:29 +0000 UTC" firstStartedPulling="2026-01-30 16:34:30.120648232 +0000 UTC m=+2318.757710831" lastFinishedPulling="2026-01-30 16:34:30.674995168 +0000 UTC m=+2319.312057767" observedRunningTime="2026-01-30 16:34:31.069982879 +0000 UTC m=+2319.707045478" watchObservedRunningTime="2026-01-30 16:34:31.078014319 +0000 UTC m=+2319.715076918" Jan 30 16:34:31 crc kubenswrapper[4740]: I0130 16:34:31.543138 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:31 crc kubenswrapper[4740]: I0130 16:34:31.544897 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:31 crc kubenswrapper[4740]: I0130 16:34:31.645460 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:32 crc kubenswrapper[4740]: I0130 16:34:32.125617 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:32 crc kubenswrapper[4740]: I0130 16:34:32.793404 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k6h6r"] Jan 30 16:34:34 crc kubenswrapper[4740]: I0130 16:34:34.090178 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k6h6r" podUID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" containerName="registry-server" containerID="cri-o://3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17" gracePeriod=2 Jan 30 16:34:34 crc kubenswrapper[4740]: I0130 16:34:34.704833 4740 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:34 crc kubenswrapper[4740]: I0130 16:34:34.835649 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-utilities\") pod \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " Jan 30 16:34:34 crc kubenswrapper[4740]: I0130 16:34:34.835930 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-catalog-content\") pod \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " Jan 30 16:34:34 crc kubenswrapper[4740]: I0130 16:34:34.835955 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxsg9\" (UniqueName: \"kubernetes.io/projected/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-kube-api-access-jxsg9\") pod \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\" (UID: \"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d\") " Jan 30 16:34:34 crc kubenswrapper[4740]: I0130 16:34:34.836719 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-utilities" (OuterVolumeSpecName: "utilities") pod "66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" (UID: "66c0ae0d-0717-4783-8a0b-3f1d6bb5592d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:34:34 crc kubenswrapper[4740]: I0130 16:34:34.847727 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-kube-api-access-jxsg9" (OuterVolumeSpecName: "kube-api-access-jxsg9") pod "66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" (UID: "66c0ae0d-0717-4783-8a0b-3f1d6bb5592d"). InnerVolumeSpecName "kube-api-access-jxsg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:34:34 crc kubenswrapper[4740]: I0130 16:34:34.938764 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:34 crc kubenswrapper[4740]: I0130 16:34:34.938829 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxsg9\" (UniqueName: \"kubernetes.io/projected/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-kube-api-access-jxsg9\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.111752 4740 generic.go:334] "Generic (PLEG): container finished" podID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" containerID="3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17" exitCode=0 Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.111811 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6h6r" event={"ID":"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d","Type":"ContainerDied","Data":"3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17"} Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.111847 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k6h6r" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.111873 4740 scope.go:117] "RemoveContainer" containerID="3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.111853 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6h6r" event={"ID":"66c0ae0d-0717-4783-8a0b-3f1d6bb5592d","Type":"ContainerDied","Data":"ab84699d8830aed17bad86d701ba07747f362f11a8253fcd20f0c318a0ab155d"} Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.137869 4740 scope.go:117] "RemoveContainer" containerID="6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.161914 4740 scope.go:117] "RemoveContainer" containerID="a4a3f5536c82218b42d03c58c2ee40cdc58e06ffb8cf10d7b8c1fd75e6079382" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.224934 4740 scope.go:117] "RemoveContainer" containerID="3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17" Jan 30 16:34:35 crc kubenswrapper[4740]: E0130 16:34:35.227098 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17\": container with ID starting with 3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17 not found: ID does not exist" containerID="3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.227139 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17"} err="failed to get container status \"3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17\": rpc error: code = NotFound desc = could not find container \"3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17\": container with ID starting with 3c9756a8e7518696ff57ab57fed6844fd3f123afa41cae905aaa08d2a4fd2e17 not found: ID does not exist" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.227170 4740 scope.go:117] "RemoveContainer" containerID="6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3" Jan 30 16:34:35 crc kubenswrapper[4740]: E0130 16:34:35.227655 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3\": container with ID starting with 6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3 not found: ID does not exist" containerID="6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.227683 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3"} err="failed to get container status \"6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3\": rpc error: code = NotFound desc = could not find container \"6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3\": container with ID starting with 6c119d885fe89f96a6f8f91b12a806807ac5f404ad81f40c5876c34064b7d4c3 not found: ID does not exist" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.227699 4740 scope.go:117] "RemoveContainer" 
containerID="a4a3f5536c82218b42d03c58c2ee40cdc58e06ffb8cf10d7b8c1fd75e6079382" Jan 30 16:34:35 crc kubenswrapper[4740]: E0130 16:34:35.228194 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4a3f5536c82218b42d03c58c2ee40cdc58e06ffb8cf10d7b8c1fd75e6079382\": container with ID starting with a4a3f5536c82218b42d03c58c2ee40cdc58e06ffb8cf10d7b8c1fd75e6079382 not found: ID does not exist" containerID="a4a3f5536c82218b42d03c58c2ee40cdc58e06ffb8cf10d7b8c1fd75e6079382" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.228265 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a3f5536c82218b42d03c58c2ee40cdc58e06ffb8cf10d7b8c1fd75e6079382"} err="failed to get container status \"a4a3f5536c82218b42d03c58c2ee40cdc58e06ffb8cf10d7b8c1fd75e6079382\": rpc error: code = NotFound desc = could not find container \"a4a3f5536c82218b42d03c58c2ee40cdc58e06ffb8cf10d7b8c1fd75e6079382\": container with ID starting with a4a3f5536c82218b42d03c58c2ee40cdc58e06ffb8cf10d7b8c1fd75e6079382 not found: ID does not exist" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.327106 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" (UID: "66c0ae0d-0717-4783-8a0b-3f1d6bb5592d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.347312 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.437641 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k6h6r"] Jan 30 16:34:35 crc kubenswrapper[4740]: I0130 16:34:35.446926 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k6h6r"] Jan 30 16:34:36 crc kubenswrapper[4740]: I0130 16:34:36.939014 4740 scope.go:117] "RemoveContainer" containerID="1b8bd308acaad5bda22ba9cdf85eea3009e8f5b0a835b284c23faf9856264ed4" Jan 30 16:34:37 crc kubenswrapper[4740]: I0130 16:34:37.054221 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-6mkbr"] Jan 30 16:34:37 crc kubenswrapper[4740]: I0130 16:34:37.069904 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-6mkbr"] Jan 30 16:34:37 crc kubenswrapper[4740]: I0130 16:34:37.348700 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" path="/var/lib/kubelet/pods/66c0ae0d-0717-4783-8a0b-3f1d6bb5592d/volumes" Jan 30 16:34:37 crc kubenswrapper[4740]: I0130 16:34:37.349762 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7949eee-0a06-4fe5-9cfc-b08bfdecc24a" path="/var/lib/kubelet/pods/c7949eee-0a06-4fe5-9cfc-b08bfdecc24a/volumes" Jan 30 16:34:43 crc kubenswrapper[4740]: I0130 16:34:43.041981 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-s62jp"] Jan 30 16:34:43 crc kubenswrapper[4740]: I0130 16:34:43.056816 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-s62jp"] Jan 30 16:34:43 crc 
kubenswrapper[4740]: I0130 16:34:43.362054 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c4b4df0-4413-4c6d-bba1-14398f0acb36" path="/var/lib/kubelet/pods/2c4b4df0-4413-4c6d-bba1-14398f0acb36/volumes"
Jan 30 16:34:54 crc kubenswrapper[4740]: I0130 16:34:54.455623 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:34:54 crc kubenswrapper[4740]: I0130 16:34:54.456660 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:34:54 crc kubenswrapper[4740]: I0130 16:34:54.456742 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6"
Jan 30 16:34:54 crc kubenswrapper[4740]: I0130 16:34:54.457944 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 16:34:54 crc kubenswrapper[4740]: I0130 16:34:54.458013 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" gracePeriod=600
Jan 30 16:34:54 crc kubenswrapper[4740]: E0130 16:34:54.584954 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:34:55 crc kubenswrapper[4740]: I0130 16:34:55.366254 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" exitCode=0
Jan 30 16:34:55 crc kubenswrapper[4740]: I0130 16:34:55.366643 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100"}
Jan 30 16:34:55 crc kubenswrapper[4740]: I0130 16:34:55.366848 4740 scope.go:117] "RemoveContainer" containerID="34c630f6b84227a1977b0bdc0b5ca309de8059895b7246dc8e8a5fd3593d976c"
Jan 30 16:34:55 crc kubenswrapper[4740]: I0130 16:34:55.367779 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100"
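The sequence above is the whole liveness-failure loop in miniature: the probe's GET on http://127.0.0.1:8798/health is refused, the container is killed with its 600s grace period, and the restart is then throttled by CrashLoopBackOff. The "back-off 5m0s" in the error below is the cap of the kubelet's restart backoff, which by default starts at 10s and doubles per consecutive failure up to 5m; a minimal sketch of that schedule (illustrative, not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    // containerBackoff models the delay before restart attempt n, assuming
    // the defaults implied by the log: 10s initial, doubling, capped at 5m.
    func containerBackoff(n int) time.Duration {
        d := 10 * time.Second
        for i := 1; i < n; i++ {
            d *= 2
            if d > 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("restart %d after %v\n", n, containerBackoff(n))
        }
        // restart 6 and later wait the full 5m0s seen in the entry below.
    }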
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:35:07 crc kubenswrapper[4740]: E0130 16:35:07.342268 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddafe432a_92c3_4e2a_8e5b_6f4579049269.slice/crio-conmon-24bb07291557ed5c68ef7c2aa9cb04f787573ea227446e61a4ca7698b06b5d5c.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:35:07 crc kubenswrapper[4740]: I0130 16:35:07.508756 4740 generic.go:334] "Generic (PLEG): container finished" podID="dafe432a-92c3-4e2a-8e5b-6f4579049269" containerID="24bb07291557ed5c68ef7c2aa9cb04f787573ea227446e61a4ca7698b06b5d5c" exitCode=0 Jan 30 16:35:07 crc kubenswrapper[4740]: I0130 16:35:07.508845 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" event={"ID":"dafe432a-92c3-4e2a-8e5b-6f4579049269","Type":"ContainerDied","Data":"24bb07291557ed5c68ef7c2aa9cb04f787573ea227446e61a4ca7698b06b5d5c"} Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.171041 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277041 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277167 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277244 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ssh-key-openstack-edpm-ipam\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277271 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-telemetry-combined-ca-bundle\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277344 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-nova-combined-ca-bundle\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: 
\"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277436 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-repo-setup-combined-ca-bundle\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277474 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-inventory\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277512 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-neutron-metadata-combined-ca-bundle\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277538 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277566 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ovn-combined-ca-bundle\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277617 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc4tk\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-kube-api-access-fc4tk\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277657 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-libvirt-combined-ca-bundle\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277673 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-bootstrap-combined-ca-bundle\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.277819 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-ovn-default-certs-0\") pod \"dafe432a-92c3-4e2a-8e5b-6f4579049269\" (UID: \"dafe432a-92c3-4e2a-8e5b-6f4579049269\") " Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.286216 4740 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.286248 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.286426 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.286470 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.288312 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.288448 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.288599 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.289332 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.289620 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.292255 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.298273 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.298379 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-kube-api-access-fc4tk" (OuterVolumeSpecName: "kube-api-access-fc4tk") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "kube-api-access-fc4tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.320857 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-inventory" (OuterVolumeSpecName: "inventory") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.327680 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dafe432a-92c3-4e2a-8e5b-6f4579049269" (UID: "dafe432a-92c3-4e2a-8e5b-6f4579049269"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381341 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc4tk\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-kube-api-access-fc4tk\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381414 4740 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381428 4740 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381442 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381457 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381478 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381493 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381507 4740 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381522 4740 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381534 4740 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381547 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381559 4740 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381576 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/dafe432a-92c3-4e2a-8e5b-6f4579049269-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.381589 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dafe432a-92c3-4e2a-8e5b-6f4579049269-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.545085 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" event={"ID":"dafe432a-92c3-4e2a-8e5b-6f4579049269","Type":"ContainerDied","Data":"85deea3fa622ea8548e95398a69112989180159f85a4a0fcde2eba72306240dc"} Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.545152 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85deea3fa622ea8548e95398a69112989180159f85a4a0fcde2eba72306240dc" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.545177 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.643934 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd"] Jan 30 16:35:09 crc kubenswrapper[4740]: E0130 16:35:09.644458 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" containerName="extract-utilities" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.644479 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" containerName="extract-utilities" Jan 30 16:35:09 crc kubenswrapper[4740]: E0130 16:35:09.644500 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" containerName="registry-server" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.644507 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" containerName="registry-server" Jan 30 16:35:09 crc kubenswrapper[4740]: E0130 16:35:09.644515 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafe432a-92c3-4e2a-8e5b-6f4579049269" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.644523 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafe432a-92c3-4e2a-8e5b-6f4579049269" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 16:35:09 crc kubenswrapper[4740]: E0130 16:35:09.644547 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" containerName="extract-content" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.644552 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" containerName="extract-content" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.644781 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c0ae0d-0717-4783-8a0b-3f1d6bb5592d" 
containerName="registry-server" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.644799 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafe432a-92c3-4e2a-8e5b-6f4579049269" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.645782 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.648699 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.648946 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.649066 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.649297 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.650511 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.661904 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd"] Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.790248 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.790307 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvhnr\" (UniqueName: \"kubernetes.io/projected/bdddad8e-9863-4a79-9883-cd130b7fe9f2-kube-api-access-fvhnr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.790337 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.790572 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.790831 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.893897 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.894149 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.894184 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvhnr\" (UniqueName: \"kubernetes.io/projected/bdddad8e-9863-4a79-9883-cd130b7fe9f2-kube-api-access-fvhnr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.894207 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.894235 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.895556 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.904053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.906209 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovn-combined-ca-bundle\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.909888 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.917189 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvhnr\" (UniqueName: \"kubernetes.io/projected/bdddad8e-9863-4a79-9883-cd130b7fe9f2-kube-api-access-fvhnr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-7mgcd\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:09 crc kubenswrapper[4740]: I0130 16:35:09.969490 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:35:10 crc kubenswrapper[4740]: I0130 16:35:10.335725 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:35:10 crc kubenswrapper[4740]: E0130 16:35:10.336541 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:35:10 crc kubenswrapper[4740]: I0130 16:35:10.645648 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd"] Jan 30 16:35:11 crc kubenswrapper[4740]: I0130 16:35:11.575122 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" event={"ID":"bdddad8e-9863-4a79-9883-cd130b7fe9f2","Type":"ContainerStarted","Data":"f65e6b8e2239538b3d6f5469e4bb8e332bc8360c1437a885e8e9c1e921321c01"} Jan 30 16:35:11 crc kubenswrapper[4740]: I0130 16:35:11.575649 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" event={"ID":"bdddad8e-9863-4a79-9883-cd130b7fe9f2","Type":"ContainerStarted","Data":"f1997e548023e2f0a1ff6242e18031379c2af1c6135380f2de74c868320be141"} Jan 30 16:35:11 crc kubenswrapper[4740]: I0130 16:35:11.607875 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" podStartSLOduration=2.099086676 podStartE2EDuration="2.607842577s" podCreationTimestamp="2026-01-30 16:35:09 +0000 UTC" firstStartedPulling="2026-01-30 16:35:10.650112213 +0000 UTC m=+2359.287174832" lastFinishedPulling="2026-01-30 16:35:11.158868134 +0000 UTC m=+2359.795930733" observedRunningTime="2026-01-30 16:35:11.600932765 +0000 UTC m=+2360.237995364" watchObservedRunningTime="2026-01-30 16:35:11.607842577 +0000 UTC m=+2360.244905186" Jan 30 16:35:21 crc kubenswrapper[4740]: I0130 16:35:21.336121 4740 scope.go:117] "RemoveContainer" 
containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:35:21 crc kubenswrapper[4740]: E0130 16:35:21.339205 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:35:32 crc kubenswrapper[4740]: I0130 16:35:32.336298 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:35:32 crc kubenswrapper[4740]: E0130 16:35:32.337412 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:35:37 crc kubenswrapper[4740]: I0130 16:35:37.077328 4740 scope.go:117] "RemoveContainer" containerID="4953016886d1c9584b7136093872873275328ea09d160cac58cdb0ec43efa2bf" Jan 30 16:35:37 crc kubenswrapper[4740]: I0130 16:35:37.136507 4740 scope.go:117] "RemoveContainer" containerID="f5591d7dc942b0127629f7755fd469f1481988fd6c775b26634319207dcebda7" Jan 30 16:35:44 crc kubenswrapper[4740]: I0130 16:35:44.336332 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:35:44 crc kubenswrapper[4740]: E0130 16:35:44.338345 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:35:56 crc kubenswrapper[4740]: I0130 16:35:56.335563 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:35:56 crc kubenswrapper[4740]: E0130 16:35:56.336408 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:36:10 crc kubenswrapper[4740]: I0130 16:36:10.272689 4740 generic.go:334] "Generic (PLEG): container finished" podID="bdddad8e-9863-4a79-9883-cd130b7fe9f2" containerID="f65e6b8e2239538b3d6f5469e4bb8e332bc8360c1437a885e8e9c1e921321c01" exitCode=0 Jan 30 16:36:10 crc kubenswrapper[4740]: I0130 16:36:10.272933 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" event={"ID":"bdddad8e-9863-4a79-9883-cd130b7fe9f2","Type":"ContainerDied","Data":"f65e6b8e2239538b3d6f5469e4bb8e332bc8360c1437a885e8e9c1e921321c01"} Jan 30 16:36:11 crc 
kubenswrapper[4740]: I0130 16:36:11.335897 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:36:11 crc kubenswrapper[4740]: E0130 16:36:11.336186 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:36:11 crc kubenswrapper[4740]: I0130 16:36:11.960692 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.029701 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-inventory\") pod \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.029985 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovncontroller-config-0\") pod \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.030116 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ssh-key-openstack-edpm-ipam\") pod \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.030164 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovn-combined-ca-bundle\") pod \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.030309 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvhnr\" (UniqueName: \"kubernetes.io/projected/bdddad8e-9863-4a79-9883-cd130b7fe9f2-kube-api-access-fvhnr\") pod \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\" (UID: \"bdddad8e-9863-4a79-9883-cd130b7fe9f2\") " Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.037469 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bdddad8e-9863-4a79-9883-cd130b7fe9f2" (UID: "bdddad8e-9863-4a79-9883-cd130b7fe9f2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.039404 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdddad8e-9863-4a79-9883-cd130b7fe9f2-kube-api-access-fvhnr" (OuterVolumeSpecName: "kube-api-access-fvhnr") pod "bdddad8e-9863-4a79-9883-cd130b7fe9f2" (UID: "bdddad8e-9863-4a79-9883-cd130b7fe9f2"). InnerVolumeSpecName "kube-api-access-fvhnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.075951 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-inventory" (OuterVolumeSpecName: "inventory") pod "bdddad8e-9863-4a79-9883-cd130b7fe9f2" (UID: "bdddad8e-9863-4a79-9883-cd130b7fe9f2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.077422 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bdddad8e-9863-4a79-9883-cd130b7fe9f2" (UID: "bdddad8e-9863-4a79-9883-cd130b7fe9f2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.086892 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "bdddad8e-9863-4a79-9883-cd130b7fe9f2" (UID: "bdddad8e-9863-4a79-9883-cd130b7fe9f2"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.134980 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvhnr\" (UniqueName: \"kubernetes.io/projected/bdddad8e-9863-4a79-9883-cd130b7fe9f2-kube-api-access-fvhnr\") on node \"crc\" DevicePath \"\"" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.135050 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.135089 4740 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.135101 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.135113 4740 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdddad8e-9863-4a79-9883-cd130b7fe9f2-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.293927 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" event={"ID":"bdddad8e-9863-4a79-9883-cd130b7fe9f2","Type":"ContainerDied","Data":"f1997e548023e2f0a1ff6242e18031379c2af1c6135380f2de74c868320be141"} Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.293991 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1997e548023e2f0a1ff6242e18031379c2af1c6135380f2de74c868320be141" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.294061 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-7mgcd" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.442460 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw"] Jan 30 16:36:12 crc kubenswrapper[4740]: E0130 16:36:12.443403 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdddad8e-9863-4a79-9883-cd130b7fe9f2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.443425 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdddad8e-9863-4a79-9883-cd130b7fe9f2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.443759 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdddad8e-9863-4a79-9883-cd130b7fe9f2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.444911 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.448809 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.448900 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.448810 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.449042 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.449265 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.455289 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.461000 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw"] Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.544722 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.544803 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.544859 4740 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.544950 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhgml\" (UniqueName: \"kubernetes.io/projected/c32077e1-24f2-46ea-868d-914b78472dfe-kube-api-access-zhgml\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.545048 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.545177 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.648131 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhgml\" (UniqueName: \"kubernetes.io/projected/c32077e1-24f2-46ea-868d-914b78472dfe-kube-api-access-zhgml\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.648372 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.648568 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.648630 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.648662 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.648714 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.655231 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.655663 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.655800 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.660053 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.661388 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.669141 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhgml\" (UniqueName: \"kubernetes.io/projected/c32077e1-24f2-46ea-868d-914b78472dfe-kube-api-access-zhgml\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:12 crc kubenswrapper[4740]: I0130 16:36:12.770824 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:36:13 crc kubenswrapper[4740]: I0130 16:36:13.396054 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw"] Jan 30 16:36:13 crc kubenswrapper[4740]: W0130 16:36:13.412694 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc32077e1_24f2_46ea_868d_914b78472dfe.slice/crio-2c0205ee3fff84ff752349f6bda55b07742c6f5a50a75dfc7247aada64cb4ca0 WatchSource:0}: Error finding container 2c0205ee3fff84ff752349f6bda55b07742c6f5a50a75dfc7247aada64cb4ca0: Status 404 returned error can't find the container with id 2c0205ee3fff84ff752349f6bda55b07742c6f5a50a75dfc7247aada64cb4ca0 Jan 30 16:36:13 crc kubenswrapper[4740]: I0130 16:36:13.419787 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:36:14 crc kubenswrapper[4740]: I0130 16:36:14.318623 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" event={"ID":"c32077e1-24f2-46ea-868d-914b78472dfe","Type":"ContainerStarted","Data":"ed3c0f7f73f94216e211febd2885fc4534b0b7ee02041e2b880e55e25f624b08"} Jan 30 16:36:14 crc kubenswrapper[4740]: I0130 16:36:14.319039 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" event={"ID":"c32077e1-24f2-46ea-868d-914b78472dfe","Type":"ContainerStarted","Data":"2c0205ee3fff84ff752349f6bda55b07742c6f5a50a75dfc7247aada64cb4ca0"} Jan 30 16:36:14 crc kubenswrapper[4740]: I0130 16:36:14.348370 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" podStartSLOduration=1.9103856609999998 podStartE2EDuration="2.34831985s" podCreationTimestamp="2026-01-30 16:36:12 +0000 UTC" firstStartedPulling="2026-01-30 16:36:13.419514423 +0000 UTC m=+2422.056577012" lastFinishedPulling="2026-01-30 16:36:13.857448602 +0000 UTC m=+2422.494511201" observedRunningTime="2026-01-30 16:36:14.340545986 +0000 UTC m=+2422.977608605" watchObservedRunningTime="2026-01-30 16:36:14.34831985 +0000 UTC m=+2422.985382449" Jan 30 16:36:26 crc kubenswrapper[4740]: I0130 16:36:26.336563 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:36:26 crc kubenswrapper[4740]: E0130 16:36:26.337937 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:36:37 crc kubenswrapper[4740]: I0130 16:36:37.337302 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:36:37 crc kubenswrapper[4740]: E0130 16:36:37.338282 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:36:50 crc kubenswrapper[4740]: I0130 16:36:50.335976 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:36:50 crc kubenswrapper[4740]: E0130 16:36:50.336701 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:37:00 crc kubenswrapper[4740]: I0130 16:37:00.870221 4740 generic.go:334] "Generic (PLEG): container finished" podID="c32077e1-24f2-46ea-868d-914b78472dfe" containerID="ed3c0f7f73f94216e211febd2885fc4534b0b7ee02041e2b880e55e25f624b08" exitCode=0 Jan 30 16:37:00 crc kubenswrapper[4740]: I0130 16:37:00.870328 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" event={"ID":"c32077e1-24f2-46ea-868d-914b78472dfe","Type":"ContainerDied","Data":"ed3c0f7f73f94216e211febd2885fc4534b0b7ee02041e2b880e55e25f624b08"} Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.409629 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.518283 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhgml\" (UniqueName: \"kubernetes.io/projected/c32077e1-24f2-46ea-868d-914b78472dfe-kube-api-access-zhgml\") pod \"c32077e1-24f2-46ea-868d-914b78472dfe\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.518439 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-nova-metadata-neutron-config-0\") pod \"c32077e1-24f2-46ea-868d-914b78472dfe\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.519734 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-inventory\") pod \"c32077e1-24f2-46ea-868d-914b78472dfe\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.519854 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-ssh-key-openstack-edpm-ipam\") pod \"c32077e1-24f2-46ea-868d-914b78472dfe\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.519943 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-metadata-combined-ca-bundle\") pod \"c32077e1-24f2-46ea-868d-914b78472dfe\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.519975 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c32077e1-24f2-46ea-868d-914b78472dfe\" (UID: \"c32077e1-24f2-46ea-868d-914b78472dfe\") " Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.525624 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32077e1-24f2-46ea-868d-914b78472dfe-kube-api-access-zhgml" (OuterVolumeSpecName: "kube-api-access-zhgml") pod "c32077e1-24f2-46ea-868d-914b78472dfe" (UID: "c32077e1-24f2-46ea-868d-914b78472dfe"). InnerVolumeSpecName "kube-api-access-zhgml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.530882 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c32077e1-24f2-46ea-868d-914b78472dfe" (UID: "c32077e1-24f2-46ea-868d-914b78472dfe"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.554474 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c32077e1-24f2-46ea-868d-914b78472dfe" (UID: "c32077e1-24f2-46ea-868d-914b78472dfe"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.563937 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c32077e1-24f2-46ea-868d-914b78472dfe" (UID: "c32077e1-24f2-46ea-868d-914b78472dfe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.566477 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-inventory" (OuterVolumeSpecName: "inventory") pod "c32077e1-24f2-46ea-868d-914b78472dfe" (UID: "c32077e1-24f2-46ea-868d-914b78472dfe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.568290 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c32077e1-24f2-46ea-868d-914b78472dfe" (UID: "c32077e1-24f2-46ea-868d-914b78472dfe"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.623068 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.623106 4740 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.623122 4740 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.623135 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhgml\" (UniqueName: \"kubernetes.io/projected/c32077e1-24f2-46ea-868d-914b78472dfe-kube-api-access-zhgml\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.623145 4740 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.623157 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32077e1-24f2-46ea-868d-914b78472dfe-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.894608 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" event={"ID":"c32077e1-24f2-46ea-868d-914b78472dfe","Type":"ContainerDied","Data":"2c0205ee3fff84ff752349f6bda55b07742c6f5a50a75dfc7247aada64cb4ca0"} Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.894659 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c0205ee3fff84ff752349f6bda55b07742c6f5a50a75dfc7247aada64cb4ca0" Jan 30 16:37:02 crc kubenswrapper[4740]: I0130 16:37:02.894706 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.049117 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp"] Jan 30 16:37:03 crc kubenswrapper[4740]: E0130 16:37:03.050147 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32077e1-24f2-46ea-868d-914b78472dfe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.050178 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32077e1-24f2-46ea-868d-914b78472dfe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.050467 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c32077e1-24f2-46ea-868d-914b78472dfe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.052116 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.054736 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.054964 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.055122 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.055623 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.055674 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.065412 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp"] Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.136700 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.136771 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lmnj\" (UniqueName: \"kubernetes.io/projected/198ac256-3459-4e44-9c68-9efd25cf1ec5-kube-api-access-9lmnj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.136936 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: 
\"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.136986 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.137025 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.239570 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.239653 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lmnj\" (UniqueName: \"kubernetes.io/projected/198ac256-3459-4e44-9c68-9efd25cf1ec5-kube-api-access-9lmnj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.239765 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.239807 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.239843 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.245038 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.245093 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.248394 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.249280 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.259224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lmnj\" (UniqueName: \"kubernetes.io/projected/198ac256-3459-4e44-9c68-9efd25cf1ec5-kube-api-access-9lmnj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.347819 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:37:03 crc kubenswrapper[4740]: E0130 16:37:03.348184 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.377197 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:37:03 crc kubenswrapper[4740]: I0130 16:37:03.980428 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp"] Jan 30 16:37:04 crc kubenswrapper[4740]: I0130 16:37:04.961689 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" event={"ID":"198ac256-3459-4e44-9c68-9efd25cf1ec5","Type":"ContainerStarted","Data":"98c2b79df90f6ef64910f5634e17e98ac220744ef98cf1102ba1a2c04415d4f0"} Jan 30 16:37:05 crc kubenswrapper[4740]: I0130 16:37:05.974086 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" event={"ID":"198ac256-3459-4e44-9c68-9efd25cf1ec5","Type":"ContainerStarted","Data":"d2add6c15c8a72bb57f5607c0bf553280e143bc46f7ecf7627b7d2594192f2b7"} Jan 30 16:37:05 crc kubenswrapper[4740]: I0130 16:37:05.994674 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" podStartSLOduration=2.27949853 podStartE2EDuration="2.994650785s" podCreationTimestamp="2026-01-30 16:37:03 +0000 UTC" firstStartedPulling="2026-01-30 16:37:03.984792409 +0000 UTC m=+2472.621855008" lastFinishedPulling="2026-01-30 16:37:04.699944654 +0000 UTC m=+2473.337007263" observedRunningTime="2026-01-30 16:37:05.992003919 +0000 UTC m=+2474.629066508" watchObservedRunningTime="2026-01-30 16:37:05.994650785 +0000 UTC m=+2474.631713384" Jan 30 16:37:16 crc kubenswrapper[4740]: I0130 16:37:16.336551 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:37:16 crc kubenswrapper[4740]: E0130 16:37:16.337617 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:37:28 crc kubenswrapper[4740]: I0130 16:37:28.336065 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:37:28 crc kubenswrapper[4740]: E0130 16:37:28.336804 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:37:43 crc kubenswrapper[4740]: I0130 16:37:43.350851 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:37:43 crc kubenswrapper[4740]: E0130 16:37:43.352182 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:37:58 crc kubenswrapper[4740]: I0130 16:37:58.335974 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:37:58 crc kubenswrapper[4740]: E0130 16:37:58.336960 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:38:13 crc kubenswrapper[4740]: I0130 16:38:13.361844 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:38:13 crc kubenswrapper[4740]: E0130 16:38:13.364148 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:38:24 crc kubenswrapper[4740]: I0130 16:38:24.335592 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:38:24 crc kubenswrapper[4740]: E0130 16:38:24.336385 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:38:35 crc kubenswrapper[4740]: I0130 16:38:35.336243 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:38:35 crc kubenswrapper[4740]: E0130 16:38:35.337123 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:38:48 crc kubenswrapper[4740]: I0130 16:38:48.337167 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:38:48 crc kubenswrapper[4740]: E0130 16:38:48.338285 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:39:03 crc kubenswrapper[4740]: I0130 16:39:03.344100 4740 
scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:39:03 crc kubenswrapper[4740]: E0130 16:39:03.344955 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:39:16 crc kubenswrapper[4740]: I0130 16:39:16.335671 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:39:16 crc kubenswrapper[4740]: E0130 16:39:16.336471 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:39:30 crc kubenswrapper[4740]: I0130 16:39:30.336279 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:39:30 crc kubenswrapper[4740]: E0130 16:39:30.337182 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:39:37 crc kubenswrapper[4740]: I0130 16:39:37.290487 4740 scope.go:117] "RemoveContainer" containerID="3650a492400229ece42cb7593f217767b3f7ce6be419d65abc80e79457dfc068" Jan 30 16:39:37 crc kubenswrapper[4740]: I0130 16:39:37.414586 4740 scope.go:117] "RemoveContainer" containerID="be26fa97302bc4f36360d250033a95d41a43fc3a386c76ab910b945f1b4e7468" Jan 30 16:39:37 crc kubenswrapper[4740]: I0130 16:39:37.442506 4740 scope.go:117] "RemoveContainer" containerID="a0ae0f051f75549a7a1e85995ba8c02418b7d90328e1ec02cf23c08c1d0642b4" Jan 30 16:39:44 crc kubenswrapper[4740]: I0130 16:39:44.335875 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:39:44 crc kubenswrapper[4740]: E0130 16:39:44.337829 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:39:58 crc kubenswrapper[4740]: I0130 16:39:58.336720 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:39:58 crc kubenswrapper[4740]: I0130 16:39:58.895332 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" 
event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"6537d50282566b70860426afe657529cd98e521ab8dfd7d02b1d9ffec4ef1d9c"} Jan 30 16:40:54 crc kubenswrapper[4740]: I0130 16:40:54.503564 4740 generic.go:334] "Generic (PLEG): container finished" podID="198ac256-3459-4e44-9c68-9efd25cf1ec5" containerID="d2add6c15c8a72bb57f5607c0bf553280e143bc46f7ecf7627b7d2594192f2b7" exitCode=0 Jan 30 16:40:54 crc kubenswrapper[4740]: I0130 16:40:54.503632 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" event={"ID":"198ac256-3459-4e44-9c68-9efd25cf1ec5","Type":"ContainerDied","Data":"d2add6c15c8a72bb57f5607c0bf553280e143bc46f7ecf7627b7d2594192f2b7"} Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.059476 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.199183 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-ssh-key-openstack-edpm-ipam\") pod \"198ac256-3459-4e44-9c68-9efd25cf1ec5\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.199326 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-combined-ca-bundle\") pod \"198ac256-3459-4e44-9c68-9efd25cf1ec5\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.199444 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-inventory\") pod \"198ac256-3459-4e44-9c68-9efd25cf1ec5\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.199547 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lmnj\" (UniqueName: \"kubernetes.io/projected/198ac256-3459-4e44-9c68-9efd25cf1ec5-kube-api-access-9lmnj\") pod \"198ac256-3459-4e44-9c68-9efd25cf1ec5\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.199827 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-secret-0\") pod \"198ac256-3459-4e44-9c68-9efd25cf1ec5\" (UID: \"198ac256-3459-4e44-9c68-9efd25cf1ec5\") " Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.206199 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "198ac256-3459-4e44-9c68-9efd25cf1ec5" (UID: "198ac256-3459-4e44-9c68-9efd25cf1ec5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.207009 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198ac256-3459-4e44-9c68-9efd25cf1ec5-kube-api-access-9lmnj" (OuterVolumeSpecName: "kube-api-access-9lmnj") pod "198ac256-3459-4e44-9c68-9efd25cf1ec5" (UID: "198ac256-3459-4e44-9c68-9efd25cf1ec5"). InnerVolumeSpecName "kube-api-access-9lmnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.235667 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-inventory" (OuterVolumeSpecName: "inventory") pod "198ac256-3459-4e44-9c68-9efd25cf1ec5" (UID: "198ac256-3459-4e44-9c68-9efd25cf1ec5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.236391 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "198ac256-3459-4e44-9c68-9efd25cf1ec5" (UID: "198ac256-3459-4e44-9c68-9efd25cf1ec5"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.237526 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "198ac256-3459-4e44-9c68-9efd25cf1ec5" (UID: "198ac256-3459-4e44-9c68-9efd25cf1ec5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.307338 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.307400 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lmnj\" (UniqueName: \"kubernetes.io/projected/198ac256-3459-4e44-9c68-9efd25cf1ec5-kube-api-access-9lmnj\") on node \"crc\" DevicePath \"\"" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.307414 4740 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.307426 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.307435 4740 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198ac256-3459-4e44-9c68-9efd25cf1ec5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.381072 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn"] Jan 30 16:40:56 crc kubenswrapper[4740]: E0130 16:40:56.381865 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198ac256-3459-4e44-9c68-9efd25cf1ec5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.381880 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="198ac256-3459-4e44-9c68-9efd25cf1ec5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.382121 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="198ac256-3459-4e44-9c68-9efd25cf1ec5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.383018 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.388025 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.388214 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.389900 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.408781 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn"] Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.511397 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.511515 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.511557 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.511587 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.511687 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.511757 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 
16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.511838 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.511873 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.511962 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrczv\" (UniqueName: \"kubernetes.io/projected/ffb086ab-4d15-4da9-babd-b3f544f4a26b-kube-api-access-lrczv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.536307 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" event={"ID":"198ac256-3459-4e44-9c68-9efd25cf1ec5","Type":"ContainerDied","Data":"98c2b79df90f6ef64910f5634e17e98ac220744ef98cf1102ba1a2c04415d4f0"} Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.536372 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98c2b79df90f6ef64910f5634e17e98ac220744ef98cf1102ba1a2c04415d4f0" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.536459 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.614231 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.614317 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.614378 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.614401 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.614447 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.614479 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.614522 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.614546 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.614603 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrczv\" (UniqueName: \"kubernetes.io/projected/ffb086ab-4d15-4da9-babd-b3f544f4a26b-kube-api-access-lrczv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.615710 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.619370 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.620115 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.620243 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.620391 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.620520 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.621141 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc 
kubenswrapper[4740]: I0130 16:40:56.622533 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.650622 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrczv\" (UniqueName: \"kubernetes.io/projected/ffb086ab-4d15-4da9-babd-b3f544f4a26b-kube-api-access-lrczv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-58bwn\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:56 crc kubenswrapper[4740]: I0130 16:40:56.731498 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:40:57 crc kubenswrapper[4740]: I0130 16:40:57.285560 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn"] Jan 30 16:40:57 crc kubenswrapper[4740]: I0130 16:40:57.547116 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" event={"ID":"ffb086ab-4d15-4da9-babd-b3f544f4a26b","Type":"ContainerStarted","Data":"33dd554d65c0278d32b861c6eeac3215161f8be5a21b453ecd3a4bff449092fa"} Jan 30 16:40:58 crc kubenswrapper[4740]: I0130 16:40:58.560518 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" event={"ID":"ffb086ab-4d15-4da9-babd-b3f544f4a26b","Type":"ContainerStarted","Data":"b9995c13e8ab6493b6c08a148b11d8ac31f1b38111608e89628eac92cc8f58b1"} Jan 30 16:40:58 crc kubenswrapper[4740]: I0130 16:40:58.599267 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" podStartSLOduration=2.106025564 podStartE2EDuration="2.59922162s" podCreationTimestamp="2026-01-30 16:40:56 +0000 UTC" firstStartedPulling="2026-01-30 16:40:57.304920339 +0000 UTC m=+2705.941982938" lastFinishedPulling="2026-01-30 16:40:57.798116395 +0000 UTC m=+2706.435178994" observedRunningTime="2026-01-30 16:40:58.58275459 +0000 UTC m=+2707.219817189" watchObservedRunningTime="2026-01-30 16:40:58.59922162 +0000 UTC m=+2707.236284259" Jan 30 16:42:24 crc kubenswrapper[4740]: I0130 16:42:24.454738 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:42:24 crc kubenswrapper[4740]: I0130 16:42:24.456604 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:42:54 crc kubenswrapper[4740]: I0130 16:42:54.455267 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:42:54 crc kubenswrapper[4740]: I0130 16:42:54.456204 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.208880 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9vfhs"] Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.213616 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.230635 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-utilities\") pod \"redhat-operators-9vfhs\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.231098 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-catalog-content\") pod \"redhat-operators-9vfhs\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.231685 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vfhs"] Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.231798 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j72c\" (UniqueName: \"kubernetes.io/projected/9803beca-ebf9-4062-af76-637cc515cd7a-kube-api-access-4j72c\") pod \"redhat-operators-9vfhs\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.333871 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-utilities\") pod \"redhat-operators-9vfhs\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.333975 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-catalog-content\") pod \"redhat-operators-9vfhs\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.334093 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j72c\" (UniqueName: \"kubernetes.io/projected/9803beca-ebf9-4062-af76-637cc515cd7a-kube-api-access-4j72c\") pod \"redhat-operators-9vfhs\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.334505 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-utilities\") pod \"redhat-operators-9vfhs\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.334555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-catalog-content\") pod \"redhat-operators-9vfhs\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.359424 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j72c\" (UniqueName: \"kubernetes.io/projected/9803beca-ebf9-4062-af76-637cc515cd7a-kube-api-access-4j72c\") pod \"redhat-operators-9vfhs\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:08 crc kubenswrapper[4740]: I0130 16:43:08.575907 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:09 crc kubenswrapper[4740]: I0130 16:43:09.187213 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vfhs"] Jan 30 16:43:10 crc kubenswrapper[4740]: I0130 16:43:10.176295 4740 generic.go:334] "Generic (PLEG): container finished" podID="9803beca-ebf9-4062-af76-637cc515cd7a" containerID="dc1ab3e8c0cece069c3ff739403d0bde058ce263b4baeca0ed6cac07de3be93d" exitCode=0 Jan 30 16:43:10 crc kubenswrapper[4740]: I0130 16:43:10.176515 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vfhs" event={"ID":"9803beca-ebf9-4062-af76-637cc515cd7a","Type":"ContainerDied","Data":"dc1ab3e8c0cece069c3ff739403d0bde058ce263b4baeca0ed6cac07de3be93d"} Jan 30 16:43:10 crc kubenswrapper[4740]: I0130 16:43:10.176935 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vfhs" event={"ID":"9803beca-ebf9-4062-af76-637cc515cd7a","Type":"ContainerStarted","Data":"1e828274b45a9e97045fce5d7804db01a5ede7cd7c3d20366c1bc303d9c506f2"} Jan 30 16:43:10 crc kubenswrapper[4740]: I0130 16:43:10.179548 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:43:11 crc kubenswrapper[4740]: I0130 16:43:11.189972 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vfhs" event={"ID":"9803beca-ebf9-4062-af76-637cc515cd7a","Type":"ContainerStarted","Data":"5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79"} Jan 30 16:43:12 crc kubenswrapper[4740]: I0130 16:43:12.205378 4740 generic.go:334] "Generic (PLEG): container finished" podID="ffb086ab-4d15-4da9-babd-b3f544f4a26b" containerID="b9995c13e8ab6493b6c08a148b11d8ac31f1b38111608e89628eac92cc8f58b1" exitCode=0 Jan 30 16:43:12 crc kubenswrapper[4740]: I0130 16:43:12.205488 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" event={"ID":"ffb086ab-4d15-4da9-babd-b3f544f4a26b","Type":"ContainerDied","Data":"b9995c13e8ab6493b6c08a148b11d8ac31f1b38111608e89628eac92cc8f58b1"} Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.809620 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.910629 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-0\") pod \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.910711 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-ssh-key-openstack-edpm-ipam\") pod \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.910792 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-0\") pod \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.910877 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-extra-config-0\") pod \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.910981 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrczv\" (UniqueName: \"kubernetes.io/projected/ffb086ab-4d15-4da9-babd-b3f544f4a26b-kube-api-access-lrczv\") pod \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.911207 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-inventory\") pod \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.911281 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-1\") pod \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.911421 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-1\") pod \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.911475 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-combined-ca-bundle\") pod \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\" (UID: \"ffb086ab-4d15-4da9-babd-b3f544f4a26b\") " Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.924660 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ffb086ab-4d15-4da9-babd-b3f544f4a26b-kube-api-access-lrczv" (OuterVolumeSpecName: "kube-api-access-lrczv") pod "ffb086ab-4d15-4da9-babd-b3f544f4a26b" (UID: "ffb086ab-4d15-4da9-babd-b3f544f4a26b"). InnerVolumeSpecName "kube-api-access-lrczv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.934038 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ffb086ab-4d15-4da9-babd-b3f544f4a26b" (UID: "ffb086ab-4d15-4da9-babd-b3f544f4a26b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.950083 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "ffb086ab-4d15-4da9-babd-b3f544f4a26b" (UID: "ffb086ab-4d15-4da9-babd-b3f544f4a26b"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.950343 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ffb086ab-4d15-4da9-babd-b3f544f4a26b" (UID: "ffb086ab-4d15-4da9-babd-b3f544f4a26b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.954068 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ffb086ab-4d15-4da9-babd-b3f544f4a26b" (UID: "ffb086ab-4d15-4da9-babd-b3f544f4a26b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.954659 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ffb086ab-4d15-4da9-babd-b3f544f4a26b" (UID: "ffb086ab-4d15-4da9-babd-b3f544f4a26b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.968451 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-inventory" (OuterVolumeSpecName: "inventory") pod "ffb086ab-4d15-4da9-babd-b3f544f4a26b" (UID: "ffb086ab-4d15-4da9-babd-b3f544f4a26b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.968923 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ffb086ab-4d15-4da9-babd-b3f544f4a26b" (UID: "ffb086ab-4d15-4da9-babd-b3f544f4a26b"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:13 crc kubenswrapper[4740]: I0130 16:43:13.977904 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ffb086ab-4d15-4da9-babd-b3f544f4a26b" (UID: "ffb086ab-4d15-4da9-babd-b3f544f4a26b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.015443 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.015483 4740 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.015497 4740 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.015507 4740 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.015517 4740 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.015528 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.015538 4740 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.015549 4740 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ffb086ab-4d15-4da9-babd-b3f544f4a26b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.015559 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrczv\" (UniqueName: \"kubernetes.io/projected/ffb086ab-4d15-4da9-babd-b3f544f4a26b-kube-api-access-lrczv\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.228268 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" event={"ID":"ffb086ab-4d15-4da9-babd-b3f544f4a26b","Type":"ContainerDied","Data":"33dd554d65c0278d32b861c6eeac3215161f8be5a21b453ecd3a4bff449092fa"} Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.228777 4740 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="33dd554d65c0278d32b861c6eeac3215161f8be5a21b453ecd3a4bff449092fa" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.228324 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-58bwn" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.348715 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj"] Jan 30 16:43:14 crc kubenswrapper[4740]: E0130 16:43:14.349332 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb086ab-4d15-4da9-babd-b3f544f4a26b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.349379 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb086ab-4d15-4da9-babd-b3f544f4a26b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.349626 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb086ab-4d15-4da9-babd-b3f544f4a26b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.350646 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.353951 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.354084 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.354462 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4wn4q" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.355311 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.358965 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.366225 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj"] Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.424851 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.425016 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.425105 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.425132 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkdzm\" (UniqueName: \"kubernetes.io/projected/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-kube-api-access-mkdzm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.425165 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.425199 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.425262 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.527321 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.527466 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.527557 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.527589 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkdzm\" (UniqueName: \"kubernetes.io/projected/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-kube-api-access-mkdzm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.527629 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.527671 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.527721 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.532894 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.533041 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.533214 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.533686 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: 
\"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.534479 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.546668 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.552482 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkdzm\" (UniqueName: \"kubernetes.io/projected/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-kube-api-access-mkdzm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b74gj\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:14 crc kubenswrapper[4740]: I0130 16:43:14.668724 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:43:15 crc kubenswrapper[4740]: I0130 16:43:15.250463 4740 generic.go:334] "Generic (PLEG): container finished" podID="9803beca-ebf9-4062-af76-637cc515cd7a" containerID="5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79" exitCode=0 Jan 30 16:43:15 crc kubenswrapper[4740]: I0130 16:43:15.250529 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vfhs" event={"ID":"9803beca-ebf9-4062-af76-637cc515cd7a","Type":"ContainerDied","Data":"5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79"} Jan 30 16:43:15 crc kubenswrapper[4740]: I0130 16:43:15.458665 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj"] Jan 30 16:43:16 crc kubenswrapper[4740]: I0130 16:43:16.266506 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vfhs" event={"ID":"9803beca-ebf9-4062-af76-637cc515cd7a","Type":"ContainerStarted","Data":"c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf"} Jan 30 16:43:16 crc kubenswrapper[4740]: I0130 16:43:16.270852 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" event={"ID":"26ccd837-ffdb-4155-b2ad-032ef3dfa49e","Type":"ContainerStarted","Data":"8c312eb18e985f707602bf85691c40772a10280c9d1df3568efc16024ca63006"} Jan 30 16:43:16 crc kubenswrapper[4740]: I0130 16:43:16.270925 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" event={"ID":"26ccd837-ffdb-4155-b2ad-032ef3dfa49e","Type":"ContainerStarted","Data":"2617e6ebe781ee144125e07c723c5dfb89475d5a130155bfee4c2e039decab6f"} Jan 30 16:43:16 crc kubenswrapper[4740]: I0130 16:43:16.290182 4740 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-operators-9vfhs" podStartSLOduration=2.828798864 podStartE2EDuration="8.290152819s" podCreationTimestamp="2026-01-30 16:43:08 +0000 UTC" firstStartedPulling="2026-01-30 16:43:10.179247524 +0000 UTC m=+2838.816310123" lastFinishedPulling="2026-01-30 16:43:15.640601489 +0000 UTC m=+2844.277664078" observedRunningTime="2026-01-30 16:43:16.288693783 +0000 UTC m=+2844.925756542" watchObservedRunningTime="2026-01-30 16:43:16.290152819 +0000 UTC m=+2844.927215418" Jan 30 16:43:16 crc kubenswrapper[4740]: I0130 16:43:16.312269 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" podStartSLOduration=1.9181761640000001 podStartE2EDuration="2.3122439s" podCreationTimestamp="2026-01-30 16:43:14 +0000 UTC" firstStartedPulling="2026-01-30 16:43:15.463813825 +0000 UTC m=+2844.100876444" lastFinishedPulling="2026-01-30 16:43:15.857881581 +0000 UTC m=+2844.494944180" observedRunningTime="2026-01-30 16:43:16.30821809 +0000 UTC m=+2844.945280699" watchObservedRunningTime="2026-01-30 16:43:16.3122439 +0000 UTC m=+2844.949306499" Jan 30 16:43:18 crc kubenswrapper[4740]: I0130 16:43:18.576451 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:18 crc kubenswrapper[4740]: I0130 16:43:18.576783 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:19 crc kubenswrapper[4740]: I0130 16:43:19.630919 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9vfhs" podUID="9803beca-ebf9-4062-af76-637cc515cd7a" containerName="registry-server" probeResult="failure" output=< Jan 30 16:43:19 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:43:19 crc kubenswrapper[4740]: > Jan 30 16:43:24 crc kubenswrapper[4740]: I0130 16:43:24.454551 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:43:24 crc kubenswrapper[4740]: I0130 16:43:24.455030 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:43:24 crc kubenswrapper[4740]: I0130 16:43:24.455089 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 16:43:24 crc kubenswrapper[4740]: I0130 16:43:24.456056 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6537d50282566b70860426afe657529cd98e521ab8dfd7d02b1d9ffec4ef1d9c"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:43:24 crc kubenswrapper[4740]: I0130 16:43:24.456145 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" 
podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://6537d50282566b70860426afe657529cd98e521ab8dfd7d02b1d9ffec4ef1d9c" gracePeriod=600 Jan 30 16:43:25 crc kubenswrapper[4740]: I0130 16:43:25.226058 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="6537d50282566b70860426afe657529cd98e521ab8dfd7d02b1d9ffec4ef1d9c" exitCode=0 Jan 30 16:43:25 crc kubenswrapper[4740]: I0130 16:43:25.226261 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"6537d50282566b70860426afe657529cd98e521ab8dfd7d02b1d9ffec4ef1d9c"} Jan 30 16:43:25 crc kubenswrapper[4740]: I0130 16:43:25.226663 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33"} Jan 30 16:43:25 crc kubenswrapper[4740]: I0130 16:43:25.226697 4740 scope.go:117] "RemoveContainer" containerID="d56ad60a2435de89e5c14d86cbd3c31c4a491ab5f9a5a5238413643823a00100" Jan 30 16:43:28 crc kubenswrapper[4740]: I0130 16:43:28.624983 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:28 crc kubenswrapper[4740]: I0130 16:43:28.682849 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:28 crc kubenswrapper[4740]: I0130 16:43:28.869962 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vfhs"] Jan 30 16:43:30 crc kubenswrapper[4740]: I0130 16:43:30.276580 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9vfhs" podUID="9803beca-ebf9-4062-af76-637cc515cd7a" containerName="registry-server" containerID="cri-o://c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf" gracePeriod=2 Jan 30 16:43:30 crc kubenswrapper[4740]: I0130 16:43:30.824732 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.024707 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-utilities\") pod \"9803beca-ebf9-4062-af76-637cc515cd7a\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.024868 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j72c\" (UniqueName: \"kubernetes.io/projected/9803beca-ebf9-4062-af76-637cc515cd7a-kube-api-access-4j72c\") pod \"9803beca-ebf9-4062-af76-637cc515cd7a\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.024899 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-catalog-content\") pod \"9803beca-ebf9-4062-af76-637cc515cd7a\" (UID: \"9803beca-ebf9-4062-af76-637cc515cd7a\") " Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.025735 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-utilities" (OuterVolumeSpecName: "utilities") pod "9803beca-ebf9-4062-af76-637cc515cd7a" (UID: "9803beca-ebf9-4062-af76-637cc515cd7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.026038 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.030926 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9803beca-ebf9-4062-af76-637cc515cd7a-kube-api-access-4j72c" (OuterVolumeSpecName: "kube-api-access-4j72c") pod "9803beca-ebf9-4062-af76-637cc515cd7a" (UID: "9803beca-ebf9-4062-af76-637cc515cd7a"). InnerVolumeSpecName "kube-api-access-4j72c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.129325 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j72c\" (UniqueName: \"kubernetes.io/projected/9803beca-ebf9-4062-af76-637cc515cd7a-kube-api-access-4j72c\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.173773 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9803beca-ebf9-4062-af76-637cc515cd7a" (UID: "9803beca-ebf9-4062-af76-637cc515cd7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.233530 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9803beca-ebf9-4062-af76-637cc515cd7a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.291538 4740 generic.go:334] "Generic (PLEG): container finished" podID="9803beca-ebf9-4062-af76-637cc515cd7a" containerID="c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf" exitCode=0 Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.291599 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vfhs" event={"ID":"9803beca-ebf9-4062-af76-637cc515cd7a","Type":"ContainerDied","Data":"c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf"} Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.291641 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vfhs" event={"ID":"9803beca-ebf9-4062-af76-637cc515cd7a","Type":"ContainerDied","Data":"1e828274b45a9e97045fce5d7804db01a5ede7cd7c3d20366c1bc303d9c506f2"} Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.291670 4740 scope.go:117] "RemoveContainer" containerID="c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.291878 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vfhs" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.320012 4740 scope.go:117] "RemoveContainer" containerID="5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.363839 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vfhs"] Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.365655 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9vfhs"] Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.376957 4740 scope.go:117] "RemoveContainer" containerID="dc1ab3e8c0cece069c3ff739403d0bde058ce263b4baeca0ed6cac07de3be93d" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.431625 4740 scope.go:117] "RemoveContainer" containerID="c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf" Jan 30 16:43:31 crc kubenswrapper[4740]: E0130 16:43:31.432787 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf\": container with ID starting with c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf not found: ID does not exist" containerID="c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.432853 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf"} err="failed to get container status \"c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf\": rpc error: code = NotFound desc = could not find container \"c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf\": container with ID starting with c0af8a55b12ccd2b13347d2d848099ac8517c13ec2e31e25235c90210b97aadf not found: ID does not exist" Jan 30 16:43:31 crc 
kubenswrapper[4740]: I0130 16:43:31.432884 4740 scope.go:117] "RemoveContainer" containerID="5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79" Jan 30 16:43:31 crc kubenswrapper[4740]: E0130 16:43:31.433397 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79\": container with ID starting with 5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79 not found: ID does not exist" containerID="5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.433503 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79"} err="failed to get container status \"5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79\": rpc error: code = NotFound desc = could not find container \"5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79\": container with ID starting with 5814f3690d08f02be3b100f95cad67e3b053ce050f907f57509767dc29022f79 not found: ID does not exist" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.433595 4740 scope.go:117] "RemoveContainer" containerID="dc1ab3e8c0cece069c3ff739403d0bde058ce263b4baeca0ed6cac07de3be93d" Jan 30 16:43:31 crc kubenswrapper[4740]: E0130 16:43:31.434070 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc1ab3e8c0cece069c3ff739403d0bde058ce263b4baeca0ed6cac07de3be93d\": container with ID starting with dc1ab3e8c0cece069c3ff739403d0bde058ce263b4baeca0ed6cac07de3be93d not found: ID does not exist" containerID="dc1ab3e8c0cece069c3ff739403d0bde058ce263b4baeca0ed6cac07de3be93d" Jan 30 16:43:31 crc kubenswrapper[4740]: I0130 16:43:31.434098 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1ab3e8c0cece069c3ff739403d0bde058ce263b4baeca0ed6cac07de3be93d"} err="failed to get container status \"dc1ab3e8c0cece069c3ff739403d0bde058ce263b4baeca0ed6cac07de3be93d\": rpc error: code = NotFound desc = could not find container \"dc1ab3e8c0cece069c3ff739403d0bde058ce263b4baeca0ed6cac07de3be93d\": container with ID starting with dc1ab3e8c0cece069c3ff739403d0bde058ce263b4baeca0ed6cac07de3be93d not found: ID does not exist" Jan 30 16:43:31 crc kubenswrapper[4740]: E0130 16:43:31.554434 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9803beca_ebf9_4062_af76_637cc515cd7a.slice/crio-1e828274b45a9e97045fce5d7804db01a5ede7cd7c3d20366c1bc303d9c506f2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9803beca_ebf9_4062_af76_637cc515cd7a.slice\": RecentStats: unable to find data in memory cache]" Jan 30 16:43:33 crc kubenswrapper[4740]: I0130 16:43:33.349503 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9803beca-ebf9-4062-af76-637cc515cd7a" path="/var/lib/kubelet/pods/9803beca-ebf9-4062-af76-637cc515cd7a/volumes" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.694983 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jwpns"] Jan 30 16:44:18 crc kubenswrapper[4740]: E0130 16:44:18.697540 4740 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="9803beca-ebf9-4062-af76-637cc515cd7a" containerName="registry-server" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.697627 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9803beca-ebf9-4062-af76-637cc515cd7a" containerName="registry-server" Jan 30 16:44:18 crc kubenswrapper[4740]: E0130 16:44:18.697718 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9803beca-ebf9-4062-af76-637cc515cd7a" containerName="extract-utilities" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.697783 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9803beca-ebf9-4062-af76-637cc515cd7a" containerName="extract-utilities" Jan 30 16:44:18 crc kubenswrapper[4740]: E0130 16:44:18.697846 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9803beca-ebf9-4062-af76-637cc515cd7a" containerName="extract-content" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.697899 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="9803beca-ebf9-4062-af76-637cc515cd7a" containerName="extract-content" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.698184 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="9803beca-ebf9-4062-af76-637cc515cd7a" containerName="registry-server" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.700048 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.716413 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwpns"] Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.750148 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h45k\" (UniqueName: \"kubernetes.io/projected/782790db-e773-4386-908d-e41eb009a76d-kube-api-access-5h45k\") pod \"redhat-marketplace-jwpns\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.750832 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-catalog-content\") pod \"redhat-marketplace-jwpns\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.750986 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-utilities\") pod \"redhat-marketplace-jwpns\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.853106 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h45k\" (UniqueName: \"kubernetes.io/projected/782790db-e773-4386-908d-e41eb009a76d-kube-api-access-5h45k\") pod \"redhat-marketplace-jwpns\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.853634 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-catalog-content\") pod \"redhat-marketplace-jwpns\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.853774 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-utilities\") pod \"redhat-marketplace-jwpns\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.854176 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-catalog-content\") pod \"redhat-marketplace-jwpns\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.854317 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-utilities\") pod \"redhat-marketplace-jwpns\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:18 crc kubenswrapper[4740]: I0130 16:44:18.875588 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h45k\" (UniqueName: \"kubernetes.io/projected/782790db-e773-4386-908d-e41eb009a76d-kube-api-access-5h45k\") pod \"redhat-marketplace-jwpns\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:19 crc kubenswrapper[4740]: I0130 16:44:19.024135 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:19 crc kubenswrapper[4740]: I0130 16:44:19.562640 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwpns"] Jan 30 16:44:19 crc kubenswrapper[4740]: I0130 16:44:19.880426 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwpns" event={"ID":"782790db-e773-4386-908d-e41eb009a76d","Type":"ContainerStarted","Data":"0dbba564a9ce8a1d1042799d734859b57639f20d131e27f288e11496686beb3e"} Jan 30 16:44:21 crc kubenswrapper[4740]: I0130 16:44:20.893486 4740 generic.go:334] "Generic (PLEG): container finished" podID="782790db-e773-4386-908d-e41eb009a76d" containerID="4499496b24ab7334de7b84e1a26f9a25913f305e59074f8107697c5689aa5556" exitCode=0 Jan 30 16:44:21 crc kubenswrapper[4740]: I0130 16:44:20.893554 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwpns" event={"ID":"782790db-e773-4386-908d-e41eb009a76d","Type":"ContainerDied","Data":"4499496b24ab7334de7b84e1a26f9a25913f305e59074f8107697c5689aa5556"} Jan 30 16:44:21 crc kubenswrapper[4740]: I0130 16:44:21.905811 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwpns" event={"ID":"782790db-e773-4386-908d-e41eb009a76d","Type":"ContainerStarted","Data":"4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13"} Jan 30 16:44:22 crc kubenswrapper[4740]: I0130 16:44:22.919195 4740 generic.go:334] "Generic (PLEG): container finished" podID="782790db-e773-4386-908d-e41eb009a76d" containerID="4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13" exitCode=0 Jan 30 16:44:22 crc kubenswrapper[4740]: I0130 16:44:22.919307 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwpns" event={"ID":"782790db-e773-4386-908d-e41eb009a76d","Type":"ContainerDied","Data":"4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13"} Jan 30 16:44:22 crc kubenswrapper[4740]: E0130 16:44:22.961006 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod782790db_e773_4386_908d_e41eb009a76d.slice/crio-conmon-4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod782790db_e773_4386_908d_e41eb009a76d.slice/crio-4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:44:23 crc kubenswrapper[4740]: I0130 16:44:23.933754 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwpns" event={"ID":"782790db-e773-4386-908d-e41eb009a76d","Type":"ContainerStarted","Data":"d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889"} Jan 30 16:44:23 crc kubenswrapper[4740]: I0130 16:44:23.966684 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jwpns" podStartSLOduration=3.542446507 podStartE2EDuration="5.966654165s" podCreationTimestamp="2026-01-30 16:44:18 +0000 UTC" firstStartedPulling="2026-01-30 16:44:20.896342832 +0000 UTC m=+2909.533405431" lastFinishedPulling="2026-01-30 16:44:23.32055048 +0000 UTC m=+2911.957613089" observedRunningTime="2026-01-30 16:44:23.955786534 +0000 
UTC m=+2912.592849133" watchObservedRunningTime="2026-01-30 16:44:23.966654165 +0000 UTC m=+2912.603716764" Jan 30 16:44:29 crc kubenswrapper[4740]: I0130 16:44:29.024448 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:29 crc kubenswrapper[4740]: I0130 16:44:29.024821 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:29 crc kubenswrapper[4740]: I0130 16:44:29.083332 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:30 crc kubenswrapper[4740]: I0130 16:44:30.074722 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:30 crc kubenswrapper[4740]: I0130 16:44:30.140941 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwpns"] Jan 30 16:44:32 crc kubenswrapper[4740]: I0130 16:44:32.035280 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jwpns" podUID="782790db-e773-4386-908d-e41eb009a76d" containerName="registry-server" containerID="cri-o://d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889" gracePeriod=2 Jan 30 16:44:32 crc kubenswrapper[4740]: I0130 16:44:32.794599 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:32 crc kubenswrapper[4740]: I0130 16:44:32.897697 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-utilities\") pod \"782790db-e773-4386-908d-e41eb009a76d\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " Jan 30 16:44:32 crc kubenswrapper[4740]: I0130 16:44:32.897789 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h45k\" (UniqueName: \"kubernetes.io/projected/782790db-e773-4386-908d-e41eb009a76d-kube-api-access-5h45k\") pod \"782790db-e773-4386-908d-e41eb009a76d\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " Jan 30 16:44:32 crc kubenswrapper[4740]: I0130 16:44:32.897920 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-catalog-content\") pod \"782790db-e773-4386-908d-e41eb009a76d\" (UID: \"782790db-e773-4386-908d-e41eb009a76d\") " Jan 30 16:44:32 crc kubenswrapper[4740]: I0130 16:44:32.898870 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-utilities" (OuterVolumeSpecName: "utilities") pod "782790db-e773-4386-908d-e41eb009a76d" (UID: "782790db-e773-4386-908d-e41eb009a76d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:32 crc kubenswrapper[4740]: I0130 16:44:32.906530 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782790db-e773-4386-908d-e41eb009a76d-kube-api-access-5h45k" (OuterVolumeSpecName: "kube-api-access-5h45k") pod "782790db-e773-4386-908d-e41eb009a76d" (UID: "782790db-e773-4386-908d-e41eb009a76d"). InnerVolumeSpecName "kube-api-access-5h45k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:44:32 crc kubenswrapper[4740]: I0130 16:44:32.932589 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "782790db-e773-4386-908d-e41eb009a76d" (UID: "782790db-e773-4386-908d-e41eb009a76d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.002651 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.002753 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h45k\" (UniqueName: \"kubernetes.io/projected/782790db-e773-4386-908d-e41eb009a76d-kube-api-access-5h45k\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.002768 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782790db-e773-4386-908d-e41eb009a76d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.050296 4740 generic.go:334] "Generic (PLEG): container finished" podID="782790db-e773-4386-908d-e41eb009a76d" containerID="d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889" exitCode=0 Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.050383 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwpns" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.050383 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwpns" event={"ID":"782790db-e773-4386-908d-e41eb009a76d","Type":"ContainerDied","Data":"d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889"} Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.050467 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwpns" event={"ID":"782790db-e773-4386-908d-e41eb009a76d","Type":"ContainerDied","Data":"0dbba564a9ce8a1d1042799d734859b57639f20d131e27f288e11496686beb3e"} Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.050509 4740 scope.go:117] "RemoveContainer" containerID="d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.094439 4740 scope.go:117] "RemoveContainer" containerID="4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.097002 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwpns"] Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.113963 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwpns"] Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.127657 4740 scope.go:117] "RemoveContainer" containerID="4499496b24ab7334de7b84e1a26f9a25913f305e59074f8107697c5689aa5556" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.170576 4740 scope.go:117] "RemoveContainer" containerID="d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889" Jan 30 16:44:33 crc kubenswrapper[4740]: E0130 16:44:33.171200 4740 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889\": container with ID starting with d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889 not found: ID does not exist" containerID="d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.171259 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889"} err="failed to get container status \"d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889\": rpc error: code = NotFound desc = could not find container \"d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889\": container with ID starting with d8868e107cc1e7ba44a155d140ce3fcd6a20f5dbc8fd761667029c27a09c8889 not found: ID does not exist" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.171301 4740 scope.go:117] "RemoveContainer" containerID="4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13" Jan 30 16:44:33 crc kubenswrapper[4740]: E0130 16:44:33.171986 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13\": container with ID starting with 4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13 not found: ID does not exist" containerID="4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.172033 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13"} err="failed to get container status \"4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13\": rpc error: code = NotFound desc = could not find container \"4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13\": container with ID starting with 4d88f98155c7076380cb34e95b13ca1cd854510dd5753a24180fb5f53d3aaa13 not found: ID does not exist" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.172080 4740 scope.go:117] "RemoveContainer" containerID="4499496b24ab7334de7b84e1a26f9a25913f305e59074f8107697c5689aa5556" Jan 30 16:44:33 crc kubenswrapper[4740]: E0130 16:44:33.172717 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4499496b24ab7334de7b84e1a26f9a25913f305e59074f8107697c5689aa5556\": container with ID starting with 4499496b24ab7334de7b84e1a26f9a25913f305e59074f8107697c5689aa5556 not found: ID does not exist" containerID="4499496b24ab7334de7b84e1a26f9a25913f305e59074f8107697c5689aa5556" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.172747 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4499496b24ab7334de7b84e1a26f9a25913f305e59074f8107697c5689aa5556"} err="failed to get container status \"4499496b24ab7334de7b84e1a26f9a25913f305e59074f8107697c5689aa5556\": rpc error: code = NotFound desc = could not find container \"4499496b24ab7334de7b84e1a26f9a25913f305e59074f8107697c5689aa5556\": container with ID starting with 4499496b24ab7334de7b84e1a26f9a25913f305e59074f8107697c5689aa5556 not found: ID does not exist" Jan 30 16:44:33 crc kubenswrapper[4740]: E0130 16:44:33.272474 4740 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod782790db_e773_4386_908d_e41eb009a76d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod782790db_e773_4386_908d_e41eb009a76d.slice/crio-0dbba564a9ce8a1d1042799d734859b57639f20d131e27f288e11496686beb3e\": RecentStats: unable to find data in memory cache]" Jan 30 16:44:33 crc kubenswrapper[4740]: I0130 16:44:33.350653 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782790db-e773-4386-908d-e41eb009a76d" path="/var/lib/kubelet/pods/782790db-e773-4386-908d-e41eb009a76d/volumes" Jan 30 16:44:46 crc kubenswrapper[4740]: I0130 16:44:46.950281 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9qc75"] Jan 30 16:44:46 crc kubenswrapper[4740]: E0130 16:44:46.951464 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782790db-e773-4386-908d-e41eb009a76d" containerName="registry-server" Jan 30 16:44:46 crc kubenswrapper[4740]: I0130 16:44:46.951486 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="782790db-e773-4386-908d-e41eb009a76d" containerName="registry-server" Jan 30 16:44:46 crc kubenswrapper[4740]: E0130 16:44:46.951494 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782790db-e773-4386-908d-e41eb009a76d" containerName="extract-content" Jan 30 16:44:46 crc kubenswrapper[4740]: I0130 16:44:46.951500 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="782790db-e773-4386-908d-e41eb009a76d" containerName="extract-content" Jan 30 16:44:46 crc kubenswrapper[4740]: E0130 16:44:46.951550 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782790db-e773-4386-908d-e41eb009a76d" containerName="extract-utilities" Jan 30 16:44:46 crc kubenswrapper[4740]: I0130 16:44:46.951557 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="782790db-e773-4386-908d-e41eb009a76d" containerName="extract-utilities" Jan 30 16:44:46 crc kubenswrapper[4740]: I0130 16:44:46.951763 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="782790db-e773-4386-908d-e41eb009a76d" containerName="registry-server" Jan 30 16:44:46 crc kubenswrapper[4740]: I0130 16:44:46.953447 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:46 crc kubenswrapper[4740]: I0130 16:44:46.962806 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9qc75"] Jan 30 16:44:46 crc kubenswrapper[4740]: I0130 16:44:46.983084 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-catalog-content\") pod \"certified-operators-9qc75\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:46 crc kubenswrapper[4740]: I0130 16:44:46.983180 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85vkl\" (UniqueName: \"kubernetes.io/projected/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-kube-api-access-85vkl\") pod \"certified-operators-9qc75\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:46 crc kubenswrapper[4740]: I0130 16:44:46.983217 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-utilities\") pod \"certified-operators-9qc75\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:47 crc kubenswrapper[4740]: I0130 16:44:47.086527 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-catalog-content\") pod \"certified-operators-9qc75\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:47 crc kubenswrapper[4740]: I0130 16:44:47.086651 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85vkl\" (UniqueName: \"kubernetes.io/projected/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-kube-api-access-85vkl\") pod \"certified-operators-9qc75\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:47 crc kubenswrapper[4740]: I0130 16:44:47.086687 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-utilities\") pod \"certified-operators-9qc75\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:47 crc kubenswrapper[4740]: I0130 16:44:47.087303 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-utilities\") pod \"certified-operators-9qc75\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:47 crc kubenswrapper[4740]: I0130 16:44:47.087560 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-catalog-content\") pod \"certified-operators-9qc75\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:47 crc kubenswrapper[4740]: I0130 16:44:47.114115 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-85vkl\" (UniqueName: \"kubernetes.io/projected/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-kube-api-access-85vkl\") pod \"certified-operators-9qc75\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:47 crc kubenswrapper[4740]: I0130 16:44:47.297757 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:47 crc kubenswrapper[4740]: I0130 16:44:47.936096 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9qc75"] Jan 30 16:44:48 crc kubenswrapper[4740]: I0130 16:44:48.237173 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" containerID="343dbc4370adab3d1dc0b3255701f379402776d4b71df89ff4718f4ef8622979" exitCode=0 Jan 30 16:44:48 crc kubenswrapper[4740]: I0130 16:44:48.237240 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qc75" event={"ID":"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918","Type":"ContainerDied","Data":"343dbc4370adab3d1dc0b3255701f379402776d4b71df89ff4718f4ef8622979"} Jan 30 16:44:48 crc kubenswrapper[4740]: I0130 16:44:48.237275 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qc75" event={"ID":"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918","Type":"ContainerStarted","Data":"12a0e94e8e8d5f8e48e741367a9a5031cb5edd4f3aa580918365525ff443c2a0"} Jan 30 16:44:50 crc kubenswrapper[4740]: I0130 16:44:50.260583 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qc75" event={"ID":"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918","Type":"ContainerStarted","Data":"06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe"} Jan 30 16:44:51 crc kubenswrapper[4740]: I0130 16:44:51.271484 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" containerID="06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe" exitCode=0 Jan 30 16:44:51 crc kubenswrapper[4740]: I0130 16:44:51.271584 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qc75" event={"ID":"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918","Type":"ContainerDied","Data":"06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe"} Jan 30 16:44:53 crc kubenswrapper[4740]: I0130 16:44:53.298681 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qc75" event={"ID":"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918","Type":"ContainerStarted","Data":"ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810"} Jan 30 16:44:53 crc kubenswrapper[4740]: I0130 16:44:53.331400 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9qc75" podStartSLOduration=3.295186953 podStartE2EDuration="7.33136866s" podCreationTimestamp="2026-01-30 16:44:46 +0000 UTC" firstStartedPulling="2026-01-30 16:44:48.242005813 +0000 UTC m=+2936.879068412" lastFinishedPulling="2026-01-30 16:44:52.27818751 +0000 UTC m=+2940.915250119" observedRunningTime="2026-01-30 16:44:53.319536965 +0000 UTC m=+2941.956599564" watchObservedRunningTime="2026-01-30 16:44:53.33136866 +0000 UTC m=+2941.968431289" Jan 30 16:44:57 crc kubenswrapper[4740]: I0130 16:44:57.298814 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:57 crc kubenswrapper[4740]: I0130 16:44:57.299417 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:57 crc kubenswrapper[4740]: I0130 16:44:57.357071 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:57 crc kubenswrapper[4740]: I0130 16:44:57.418525 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:44:57 crc kubenswrapper[4740]: I0130 16:44:57.594082 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9qc75"] Jan 30 16:44:59 crc kubenswrapper[4740]: I0130 16:44:59.365248 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9qc75" podUID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" containerName="registry-server" containerID="cri-o://ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810" gracePeriod=2 Jan 30 16:44:59 crc kubenswrapper[4740]: I0130 16:44:59.912292 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.017792 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85vkl\" (UniqueName: \"kubernetes.io/projected/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-kube-api-access-85vkl\") pod \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.018175 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-utilities\") pod \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.018530 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-catalog-content\") pod \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\" (UID: \"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918\") " Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.019257 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-utilities" (OuterVolumeSpecName: "utilities") pod "ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" (UID: "ef6b6037-65a0-4bf8-9e0d-7a7223bbe918"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.031684 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-kube-api-access-85vkl" (OuterVolumeSpecName: "kube-api-access-85vkl") pod "ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" (UID: "ef6b6037-65a0-4bf8-9e0d-7a7223bbe918"). InnerVolumeSpecName "kube-api-access-85vkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.082822 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" (UID: "ef6b6037-65a0-4bf8-9e0d-7a7223bbe918"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.121287 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.121347 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.121380 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85vkl\" (UniqueName: \"kubernetes.io/projected/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918-kube-api-access-85vkl\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.149112 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d"] Jan 30 16:45:00 crc kubenswrapper[4740]: E0130 16:45:00.149806 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" containerName="extract-utilities" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.149843 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" containerName="extract-utilities" Jan 30 16:45:00 crc kubenswrapper[4740]: E0130 16:45:00.149868 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" containerName="extract-content" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.149877 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" containerName="extract-content" Jan 30 16:45:00 crc kubenswrapper[4740]: E0130 16:45:00.149935 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" containerName="registry-server" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.149943 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" containerName="registry-server" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.150226 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" containerName="registry-server" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.151538 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.158619 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.161662 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d"] Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.164220 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.325745 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27x5x\" (UniqueName: \"kubernetes.io/projected/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-kube-api-access-27x5x\") pod \"collect-profiles-29496525-pwv7d\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.325905 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-config-volume\") pod \"collect-profiles-29496525-pwv7d\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.325952 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-secret-volume\") pod \"collect-profiles-29496525-pwv7d\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.378332 4740 generic.go:334] "Generic (PLEG): container finished" podID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" containerID="ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810" exitCode=0 Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.378400 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qc75" event={"ID":"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918","Type":"ContainerDied","Data":"ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810"} Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.378439 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qc75" event={"ID":"ef6b6037-65a0-4bf8-9e0d-7a7223bbe918","Type":"ContainerDied","Data":"12a0e94e8e8d5f8e48e741367a9a5031cb5edd4f3aa580918365525ff443c2a0"} Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.378461 4740 scope.go:117] "RemoveContainer" containerID="ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.379572 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9qc75" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.401715 4740 scope.go:117] "RemoveContainer" containerID="06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.425590 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9qc75"] Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.428167 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-secret-volume\") pod \"collect-profiles-29496525-pwv7d\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.428300 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27x5x\" (UniqueName: \"kubernetes.io/projected/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-kube-api-access-27x5x\") pod \"collect-profiles-29496525-pwv7d\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.428505 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-config-volume\") pod \"collect-profiles-29496525-pwv7d\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.430223 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-config-volume\") pod \"collect-profiles-29496525-pwv7d\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.434403 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-secret-volume\") pod \"collect-profiles-29496525-pwv7d\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.435067 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9qc75"] Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.435306 4740 scope.go:117] "RemoveContainer" containerID="343dbc4370adab3d1dc0b3255701f379402776d4b71df89ff4718f4ef8622979" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.446004 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27x5x\" (UniqueName: \"kubernetes.io/projected/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-kube-api-access-27x5x\") pod \"collect-profiles-29496525-pwv7d\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.476899 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.488143 4740 scope.go:117] "RemoveContainer" containerID="ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810" Jan 30 16:45:00 crc kubenswrapper[4740]: E0130 16:45:00.488773 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810\": container with ID starting with ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810 not found: ID does not exist" containerID="ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.488832 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810"} err="failed to get container status \"ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810\": rpc error: code = NotFound desc = could not find container \"ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810\": container with ID starting with ce2a49fefe34434338346ee1d4785996d1b6aeda26f1d00c4f89c5898aec6810 not found: ID does not exist" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.488877 4740 scope.go:117] "RemoveContainer" containerID="06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe" Jan 30 16:45:00 crc kubenswrapper[4740]: E0130 16:45:00.489423 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe\": container with ID starting with 06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe not found: ID does not exist" containerID="06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.489461 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe"} err="failed to get container status \"06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe\": rpc error: code = NotFound desc = could not find container \"06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe\": container with ID starting with 06e4846633c646aea1e7e86314068e7d96ea4abb6fc672572d4f8ff7203461fe not found: ID does not exist" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.489481 4740 scope.go:117] "RemoveContainer" containerID="343dbc4370adab3d1dc0b3255701f379402776d4b71df89ff4718f4ef8622979" Jan 30 16:45:00 crc kubenswrapper[4740]: E0130 16:45:00.490179 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"343dbc4370adab3d1dc0b3255701f379402776d4b71df89ff4718f4ef8622979\": container with ID starting with 343dbc4370adab3d1dc0b3255701f379402776d4b71df89ff4718f4ef8622979 not found: ID does not exist" containerID="343dbc4370adab3d1dc0b3255701f379402776d4b71df89ff4718f4ef8622979" Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.490220 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"343dbc4370adab3d1dc0b3255701f379402776d4b71df89ff4718f4ef8622979"} err="failed to get container status \"343dbc4370adab3d1dc0b3255701f379402776d4b71df89ff4718f4ef8622979\": 
Jan 30 16:45:00 crc kubenswrapper[4740]: I0130 16:45:00.944668 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d"]
Jan 30 16:45:00 crc kubenswrapper[4740]: W0130 16:45:00.946245 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1be0ffe_20b2_4ced_9ba3_6ca34b40d005.slice/crio-5e33f1954bcc9c4d04dec422ab5265a8cbd62e2d457ec0d81bb65a60322b2a5a WatchSource:0}: Error finding container 5e33f1954bcc9c4d04dec422ab5265a8cbd62e2d457ec0d81bb65a60322b2a5a: Status 404 returned error can't find the container with id 5e33f1954bcc9c4d04dec422ab5265a8cbd62e2d457ec0d81bb65a60322b2a5a
Jan 30 16:45:01 crc kubenswrapper[4740]: I0130 16:45:01.349331 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6b6037-65a0-4bf8-9e0d-7a7223bbe918" path="/var/lib/kubelet/pods/ef6b6037-65a0-4bf8-9e0d-7a7223bbe918/volumes"
Jan 30 16:45:01 crc kubenswrapper[4740]: I0130 16:45:01.394866 4740 generic.go:334] "Generic (PLEG): container finished" podID="f1be0ffe-20b2-4ced-9ba3-6ca34b40d005" containerID="c1aba6cd1445eec4800c5e4b6e1741d1398353326a92c628cfd756703fbdd9aa" exitCode=0
Jan 30 16:45:01 crc kubenswrapper[4740]: I0130 16:45:01.394927 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" event={"ID":"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005","Type":"ContainerDied","Data":"c1aba6cd1445eec4800c5e4b6e1741d1398353326a92c628cfd756703fbdd9aa"}
Jan 30 16:45:01 crc kubenswrapper[4740]: I0130 16:45:01.394968 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" event={"ID":"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005","Type":"ContainerStarted","Data":"5e33f1954bcc9c4d04dec422ab5265a8cbd62e2d457ec0d81bb65a60322b2a5a"}
Jan 30 16:45:02 crc kubenswrapper[4740]: I0130 16:45:02.823577 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d"
Jan 30 16:45:02 crc kubenswrapper[4740]: I0130 16:45:02.995570 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27x5x\" (UniqueName: \"kubernetes.io/projected/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-kube-api-access-27x5x\") pod \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") "
Jan 30 16:45:02 crc kubenswrapper[4740]: I0130 16:45:02.995702 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-config-volume\") pod \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") "
Jan 30 16:45:02 crc kubenswrapper[4740]: I0130 16:45:02.995876 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-secret-volume\") pod \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\" (UID: \"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005\") "
Jan 30 16:45:02 crc kubenswrapper[4740]: I0130 16:45:02.996597 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-config-volume" (OuterVolumeSpecName: "config-volume") pod "f1be0ffe-20b2-4ced-9ba3-6ca34b40d005" (UID: "f1be0ffe-20b2-4ced-9ba3-6ca34b40d005"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:45:02 crc kubenswrapper[4740]: I0130 16:45:02.997133 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:03 crc kubenswrapper[4740]: I0130 16:45:03.001670 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f1be0ffe-20b2-4ced-9ba3-6ca34b40d005" (UID: "f1be0ffe-20b2-4ced-9ba3-6ca34b40d005"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:03 crc kubenswrapper[4740]: I0130 16:45:03.002065 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-kube-api-access-27x5x" (OuterVolumeSpecName: "kube-api-access-27x5x") pod "f1be0ffe-20b2-4ced-9ba3-6ca34b40d005" (UID: "f1be0ffe-20b2-4ced-9ba3-6ca34b40d005"). InnerVolumeSpecName "kube-api-access-27x5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:45:03 crc kubenswrapper[4740]: I0130 16:45:03.099068 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27x5x\" (UniqueName: \"kubernetes.io/projected/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-kube-api-access-27x5x\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:03 crc kubenswrapper[4740]: I0130 16:45:03.099326 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1be0ffe-20b2-4ced-9ba3-6ca34b40d005-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:03 crc kubenswrapper[4740]: I0130 16:45:03.417185 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d"
Jan 30 16:45:03 crc kubenswrapper[4740]: I0130 16:45:03.417185 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-pwv7d" event={"ID":"f1be0ffe-20b2-4ced-9ba3-6ca34b40d005","Type":"ContainerDied","Data":"5e33f1954bcc9c4d04dec422ab5265a8cbd62e2d457ec0d81bb65a60322b2a5a"}
Jan 30 16:45:03 crc kubenswrapper[4740]: I0130 16:45:03.417300 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e33f1954bcc9c4d04dec422ab5265a8cbd62e2d457ec0d81bb65a60322b2a5a"
Jan 30 16:45:03 crc kubenswrapper[4740]: I0130 16:45:03.914833 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz"]
Jan 30 16:45:03 crc kubenswrapper[4740]: I0130 16:45:03.926129 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496480-t5mrz"]
Jan 30 16:45:05 crc kubenswrapper[4740]: I0130 16:45:05.356556 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b18894-dbba-45c6-ab14-330dbc6e0521" path="/var/lib/kubelet/pods/83b18894-dbba-45c6-ab14-330dbc6e0521/volumes"
Jan 30 16:45:24 crc kubenswrapper[4740]: I0130 16:45:24.454543 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:45:24 crc kubenswrapper[4740]: I0130 16:45:24.455234 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.768865 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zfvfk"]
Jan 30 16:45:36 crc kubenswrapper[4740]: E0130 16:45:36.772730 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1be0ffe-20b2-4ced-9ba3-6ca34b40d005" containerName="collect-profiles"
Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.772881 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1be0ffe-20b2-4ced-9ba3-6ca34b40d005" containerName="collect-profiles"
Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.773304 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1be0ffe-20b2-4ced-9ba3-6ca34b40d005" containerName="collect-profiles"
Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.777030 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfvfk"
Need to start a new one" pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.783110 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zfvfk"] Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.887099 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qb2b\" (UniqueName: \"kubernetes.io/projected/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-kube-api-access-7qb2b\") pod \"community-operators-zfvfk\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.887155 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-utilities\") pod \"community-operators-zfvfk\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.887258 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-catalog-content\") pod \"community-operators-zfvfk\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.991018 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qb2b\" (UniqueName: \"kubernetes.io/projected/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-kube-api-access-7qb2b\") pod \"community-operators-zfvfk\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.991111 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-utilities\") pod \"community-operators-zfvfk\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.991202 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-catalog-content\") pod \"community-operators-zfvfk\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.991890 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-utilities\") pod \"community-operators-zfvfk\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:36 crc kubenswrapper[4740]: I0130 16:45:36.992070 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-catalog-content\") pod \"community-operators-zfvfk\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:37 crc kubenswrapper[4740]: I0130 16:45:37.032071 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7qb2b\" (UniqueName: \"kubernetes.io/projected/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-kube-api-access-7qb2b\") pod \"community-operators-zfvfk\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:37 crc kubenswrapper[4740]: I0130 16:45:37.106295 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:37 crc kubenswrapper[4740]: I0130 16:45:37.724902 4740 scope.go:117] "RemoveContainer" containerID="7445366211bbb02f9a6e67e116b47e2b97096d3d84250fca04c12ca1ee2e1906" Jan 30 16:45:37 crc kubenswrapper[4740]: I0130 16:45:37.755061 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zfvfk"] Jan 30 16:45:38 crc kubenswrapper[4740]: I0130 16:45:38.215839 4740 generic.go:334] "Generic (PLEG): container finished" podID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" containerID="71b253813c0bbc8e3112e4b0fd981f7664514da71389a073d5ff790a35aaf151" exitCode=0 Jan 30 16:45:38 crc kubenswrapper[4740]: I0130 16:45:38.216003 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfvfk" event={"ID":"cb6738ee-fcc6-4fc0-9af2-a39e31f08388","Type":"ContainerDied","Data":"71b253813c0bbc8e3112e4b0fd981f7664514da71389a073d5ff790a35aaf151"} Jan 30 16:45:38 crc kubenswrapper[4740]: I0130 16:45:38.217582 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfvfk" event={"ID":"cb6738ee-fcc6-4fc0-9af2-a39e31f08388","Type":"ContainerStarted","Data":"9ba8094421e06a11c1dff41bfc2b8969768412a4a29b5a159cbb9a5631e8eaf4"} Jan 30 16:45:39 crc kubenswrapper[4740]: I0130 16:45:39.230671 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfvfk" event={"ID":"cb6738ee-fcc6-4fc0-9af2-a39e31f08388","Type":"ContainerStarted","Data":"a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b"} Jan 30 16:45:40 crc kubenswrapper[4740]: I0130 16:45:40.243754 4740 generic.go:334] "Generic (PLEG): container finished" podID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" containerID="a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b" exitCode=0 Jan 30 16:45:40 crc kubenswrapper[4740]: I0130 16:45:40.243875 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfvfk" event={"ID":"cb6738ee-fcc6-4fc0-9af2-a39e31f08388","Type":"ContainerDied","Data":"a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b"} Jan 30 16:45:41 crc kubenswrapper[4740]: I0130 16:45:41.256751 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfvfk" event={"ID":"cb6738ee-fcc6-4fc0-9af2-a39e31f08388","Type":"ContainerStarted","Data":"eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce"} Jan 30 16:45:41 crc kubenswrapper[4740]: I0130 16:45:41.286016 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zfvfk" podStartSLOduration=2.8116801689999997 podStartE2EDuration="5.285972052s" podCreationTimestamp="2026-01-30 16:45:36 +0000 UTC" firstStartedPulling="2026-01-30 16:45:38.218252512 +0000 UTC m=+2986.855315111" lastFinishedPulling="2026-01-30 16:45:40.692544385 +0000 UTC m=+2989.329606994" observedRunningTime="2026-01-30 16:45:41.282949436 +0000 UTC m=+2989.920012035" 
Jan 30 16:45:44 crc kubenswrapper[4740]: I0130 16:45:44.299431 4740 generic.go:334] "Generic (PLEG): container finished" podID="26ccd837-ffdb-4155-b2ad-032ef3dfa49e" containerID="8c312eb18e985f707602bf85691c40772a10280c9d1df3568efc16024ca63006" exitCode=0
Jan 30 16:45:44 crc kubenswrapper[4740]: I0130 16:45:44.299487 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" event={"ID":"26ccd837-ffdb-4155-b2ad-032ef3dfa49e","Type":"ContainerDied","Data":"8c312eb18e985f707602bf85691c40772a10280c9d1df3568efc16024ca63006"}
Jan 30 16:45:45 crc kubenswrapper[4740]: I0130 16:45:45.864510 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj"
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.019627 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-0\") pod \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") "
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.019744 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-inventory\") pod \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") "
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.019982 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-1\") pod \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") "
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.020142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-2\") pod \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") "
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.020169 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-telemetry-combined-ca-bundle\") pod \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") "
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.020399 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ssh-key-openstack-edpm-ipam\") pod \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") "
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.020429 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkdzm\" (UniqueName: \"kubernetes.io/projected/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-kube-api-access-mkdzm\") pod \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\" (UID: \"26ccd837-ffdb-4155-b2ad-032ef3dfa49e\") "
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.028796 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "26ccd837-ffdb-4155-b2ad-032ef3dfa49e" (UID: "26ccd837-ffdb-4155-b2ad-032ef3dfa49e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.036385 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-kube-api-access-mkdzm" (OuterVolumeSpecName: "kube-api-access-mkdzm") pod "26ccd837-ffdb-4155-b2ad-032ef3dfa49e" (UID: "26ccd837-ffdb-4155-b2ad-032ef3dfa49e"). InnerVolumeSpecName "kube-api-access-mkdzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.060618 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "26ccd837-ffdb-4155-b2ad-032ef3dfa49e" (UID: "26ccd837-ffdb-4155-b2ad-032ef3dfa49e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.061309 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-inventory" (OuterVolumeSpecName: "inventory") pod "26ccd837-ffdb-4155-b2ad-032ef3dfa49e" (UID: "26ccd837-ffdb-4155-b2ad-032ef3dfa49e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.062312 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "26ccd837-ffdb-4155-b2ad-032ef3dfa49e" (UID: "26ccd837-ffdb-4155-b2ad-032ef3dfa49e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.076356 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "26ccd837-ffdb-4155-b2ad-032ef3dfa49e" (UID: "26ccd837-ffdb-4155-b2ad-032ef3dfa49e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.085430 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "26ccd837-ffdb-4155-b2ad-032ef3dfa49e" (UID: "26ccd837-ffdb-4155-b2ad-032ef3dfa49e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.123222 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.123289 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkdzm\" (UniqueName: \"kubernetes.io/projected/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-kube-api-access-mkdzm\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.123304 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.123316 4740 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.123331 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.123342 4740 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.123493 4740 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ccd837-ffdb-4155-b2ad-032ef3dfa49e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.325308 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" event={"ID":"26ccd837-ffdb-4155-b2ad-032ef3dfa49e","Type":"ContainerDied","Data":"2617e6ebe781ee144125e07c723c5dfb89475d5a130155bfee4c2e039decab6f"}
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.325805 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2617e6ebe781ee144125e07c723c5dfb89475d5a130155bfee4c2e039decab6f"
Jan 30 16:45:46 crc kubenswrapper[4740]: I0130 16:45:46.325590 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj"
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b74gj" Jan 30 16:45:47 crc kubenswrapper[4740]: I0130 16:45:47.107282 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:47 crc kubenswrapper[4740]: I0130 16:45:47.107360 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:47 crc kubenswrapper[4740]: I0130 16:45:47.159582 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:47 crc kubenswrapper[4740]: I0130 16:45:47.392055 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:47 crc kubenswrapper[4740]: I0130 16:45:47.453412 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zfvfk"] Jan 30 16:45:49 crc kubenswrapper[4740]: I0130 16:45:49.366390 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zfvfk" podUID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" containerName="registry-server" containerID="cri-o://eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce" gracePeriod=2 Jan 30 16:45:49 crc kubenswrapper[4740]: I0130 16:45:49.947851 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.060200 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-catalog-content\") pod \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.060797 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qb2b\" (UniqueName: \"kubernetes.io/projected/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-kube-api-access-7qb2b\") pod \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.061049 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-utilities\") pod \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\" (UID: \"cb6738ee-fcc6-4fc0-9af2-a39e31f08388\") " Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.062270 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-utilities" (OuterVolumeSpecName: "utilities") pod "cb6738ee-fcc6-4fc0-9af2-a39e31f08388" (UID: "cb6738ee-fcc6-4fc0-9af2-a39e31f08388"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.068557 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-kube-api-access-7qb2b" (OuterVolumeSpecName: "kube-api-access-7qb2b") pod "cb6738ee-fcc6-4fc0-9af2-a39e31f08388" (UID: "cb6738ee-fcc6-4fc0-9af2-a39e31f08388"). InnerVolumeSpecName "kube-api-access-7qb2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.165428 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qb2b\" (UniqueName: \"kubernetes.io/projected/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-kube-api-access-7qb2b\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.165481 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.378880 4740 generic.go:334] "Generic (PLEG): container finished" podID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" containerID="eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce" exitCode=0 Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.378950 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfvfk" event={"ID":"cb6738ee-fcc6-4fc0-9af2-a39e31f08388","Type":"ContainerDied","Data":"eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce"} Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.379022 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zfvfk" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.379048 4740 scope.go:117] "RemoveContainer" containerID="eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.379030 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zfvfk" event={"ID":"cb6738ee-fcc6-4fc0-9af2-a39e31f08388","Type":"ContainerDied","Data":"9ba8094421e06a11c1dff41bfc2b8969768412a4a29b5a159cbb9a5631e8eaf4"} Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.403553 4740 scope.go:117] "RemoveContainer" containerID="a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.427788 4740 scope.go:117] "RemoveContainer" containerID="71b253813c0bbc8e3112e4b0fd981f7664514da71389a073d5ff790a35aaf151" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.510085 4740 scope.go:117] "RemoveContainer" containerID="eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce" Jan 30 16:45:50 crc kubenswrapper[4740]: E0130 16:45:50.511616 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce\": container with ID starting with eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce not found: ID does not exist" containerID="eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.511759 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce"} err="failed to get container status \"eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce\": rpc error: code = NotFound desc = could not find container \"eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce\": container with ID starting with eea6a17c95b34feb2ae5eaed8f201d63d74f597421520efe9402efa1203477ce not found: ID does not exist" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.511890 4740 scope.go:117] 
"RemoveContainer" containerID="a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b" Jan 30 16:45:50 crc kubenswrapper[4740]: E0130 16:45:50.512581 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b\": container with ID starting with a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b not found: ID does not exist" containerID="a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.512719 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b"} err="failed to get container status \"a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b\": rpc error: code = NotFound desc = could not find container \"a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b\": container with ID starting with a1d6fb44b0c5e452db138901c81c6c39496cd703eee8dd75ff0c6cf920b29c9b not found: ID does not exist" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.512768 4740 scope.go:117] "RemoveContainer" containerID="71b253813c0bbc8e3112e4b0fd981f7664514da71389a073d5ff790a35aaf151" Jan 30 16:45:50 crc kubenswrapper[4740]: E0130 16:45:50.513391 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71b253813c0bbc8e3112e4b0fd981f7664514da71389a073d5ff790a35aaf151\": container with ID starting with 71b253813c0bbc8e3112e4b0fd981f7664514da71389a073d5ff790a35aaf151 not found: ID does not exist" containerID="71b253813c0bbc8e3112e4b0fd981f7664514da71389a073d5ff790a35aaf151" Jan 30 16:45:50 crc kubenswrapper[4740]: I0130 16:45:50.513518 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71b253813c0bbc8e3112e4b0fd981f7664514da71389a073d5ff790a35aaf151"} err="failed to get container status \"71b253813c0bbc8e3112e4b0fd981f7664514da71389a073d5ff790a35aaf151\": rpc error: code = NotFound desc = could not find container \"71b253813c0bbc8e3112e4b0fd981f7664514da71389a073d5ff790a35aaf151\": container with ID starting with 71b253813c0bbc8e3112e4b0fd981f7664514da71389a073d5ff790a35aaf151 not found: ID does not exist" Jan 30 16:45:51 crc kubenswrapper[4740]: I0130 16:45:51.100669 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb6738ee-fcc6-4fc0-9af2-a39e31f08388" (UID: "cb6738ee-fcc6-4fc0-9af2-a39e31f08388"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:45:51 crc kubenswrapper[4740]: I0130 16:45:51.189224 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb6738ee-fcc6-4fc0-9af2-a39e31f08388-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:45:51 crc kubenswrapper[4740]: I0130 16:45:51.321347 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zfvfk"] Jan 30 16:45:51 crc kubenswrapper[4740]: I0130 16:45:51.331246 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zfvfk"] Jan 30 16:45:51 crc kubenswrapper[4740]: I0130 16:45:51.348207 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" path="/var/lib/kubelet/pods/cb6738ee-fcc6-4fc0-9af2-a39e31f08388/volumes" Jan 30 16:45:54 crc kubenswrapper[4740]: I0130 16:45:54.455005 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:45:54 crc kubenswrapper[4740]: I0130 16:45:54.455470 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:46:24 crc kubenswrapper[4740]: I0130 16:46:24.455449 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:46:24 crc kubenswrapper[4740]: I0130 16:46:24.456510 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:46:24 crc kubenswrapper[4740]: I0130 16:46:24.456584 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 16:46:24 crc kubenswrapper[4740]: I0130 16:46:24.457827 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:46:24 crc kubenswrapper[4740]: I0130 16:46:24.457888 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" gracePeriod=600 Jan 30 16:46:24 crc kubenswrapper[4740]: E0130 16:46:24.589809 4740 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:46:24 crc kubenswrapper[4740]: I0130 16:46:24.744882 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" exitCode=0 Jan 30 16:46:24 crc kubenswrapper[4740]: I0130 16:46:24.744941 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33"} Jan 30 16:46:24 crc kubenswrapper[4740]: I0130 16:46:24.744989 4740 scope.go:117] "RemoveContainer" containerID="6537d50282566b70860426afe657529cd98e521ab8dfd7d02b1d9ffec4ef1d9c" Jan 30 16:46:24 crc kubenswrapper[4740]: I0130 16:46:24.745843 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:46:24 crc kubenswrapper[4740]: E0130 16:46:24.746130 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:46:38 crc kubenswrapper[4740]: I0130 16:46:38.336769 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:46:38 crc kubenswrapper[4740]: E0130 16:46:38.338061 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:46:49 crc kubenswrapper[4740]: I0130 16:46:49.335913 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:46:49 crc kubenswrapper[4740]: E0130 16:46:49.336959 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:47:01 crc kubenswrapper[4740]: I0130 16:47:01.336091 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:47:01 crc kubenswrapper[4740]: E0130 16:47:01.336969 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.928423 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 16:47:13 crc kubenswrapper[4740]: E0130 16:47:13.930074 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" containerName="registry-server" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.930100 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" containerName="registry-server" Jan 30 16:47:13 crc kubenswrapper[4740]: E0130 16:47:13.930118 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" containerName="extract-utilities" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.930127 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" containerName="extract-utilities" Jan 30 16:47:13 crc kubenswrapper[4740]: E0130 16:47:13.930162 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" containerName="extract-content" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.930170 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" containerName="extract-content" Jan 30 16:47:13 crc kubenswrapper[4740]: E0130 16:47:13.930221 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ccd837-ffdb-4155-b2ad-032ef3dfa49e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.930231 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ccd837-ffdb-4155-b2ad-032ef3dfa49e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.930659 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb6738ee-fcc6-4fc0-9af2-a39e31f08388" containerName="registry-server" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.930678 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ccd837-ffdb-4155-b2ad-032ef3dfa49e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.931761 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.936484 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.936821 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.937029 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.937230 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nbdq2" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.945115 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.989898 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.989945 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:13 crc kubenswrapper[4740]: I0130 16:47:13.989983 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-config-data\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.092684 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-config-data\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.092817 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.092896 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.093017 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.093118 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.093176 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r85qf\" (UniqueName: \"kubernetes.io/projected/bbb613b4-f2f3-4388-ae48-986e0281000f-kube-api-access-r85qf\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.093233 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.093426 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.093487 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.094188 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-config-data\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.094624 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.104505 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.196982 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" 
(UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.197112 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.197248 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.197283 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.197318 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r85qf\" (UniqueName: \"kubernetes.io/projected/bbb613b4-f2f3-4388-ae48-986e0281000f-kube-api-access-r85qf\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.197374 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.197710 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.197953 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.198401 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.201188 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 
16:47:14.202055 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.216553 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r85qf\" (UniqueName: \"kubernetes.io/projected/bbb613b4-f2f3-4388-ae48-986e0281000f-kube-api-access-r85qf\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.233428 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") " pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.321981 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.336028 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:47:14 crc kubenswrapper[4740]: E0130 16:47:14.336844 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:47:14 crc kubenswrapper[4740]: I0130 16:47:14.893179 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 16:47:15 crc kubenswrapper[4740]: I0130 16:47:15.278511 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bbb613b4-f2f3-4388-ae48-986e0281000f","Type":"ContainerStarted","Data":"df9373867e55878748803ac907db0b6c57eedc622b583d6f32ba49a80e869d3b"} Jan 30 16:47:24 crc kubenswrapper[4740]: I0130 16:47:24.215593 4740 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm" podUID="9fa5493f-2e76-4fda-9a43-4d8e7828f2a7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:47:24 crc kubenswrapper[4740]: I0130 16:47:24.256713 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-jjtfm" podUID="9fa5493f-2e76-4fda-9a43-4d8e7828f2a7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:47:26 crc kubenswrapper[4740]: I0130 16:47:26.335790 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:47:26 crc kubenswrapper[4740]: E0130 16:47:26.336471 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:47:40 crc kubenswrapper[4740]: I0130 16:47:40.336567 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:47:40 crc kubenswrapper[4740]: E0130 16:47:40.337342 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:47:55 crc kubenswrapper[4740]: I0130 16:47:55.335756 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:47:55 crc kubenswrapper[4740]: E0130 16:47:55.336895 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:48:08 crc kubenswrapper[4740]: I0130 16:48:08.335471 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:48:08 crc kubenswrapper[4740]: E0130 16:48:08.336277 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:48:20 crc kubenswrapper[4740]: I0130 16:48:20.336164 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:48:20 crc kubenswrapper[4740]: E0130 16:48:20.336967 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:48:31 crc kubenswrapper[4740]: I0130 16:48:31.335851 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:48:31 crc kubenswrapper[4740]: E0130 16:48:31.337166 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:48:43 crc kubenswrapper[4740]: I0130 16:48:43.344067 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:48:43 crc kubenswrapper[4740]: E0130 16:48:43.344979 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:48:57 crc kubenswrapper[4740]: I0130 16:48:57.336321 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:48:57 crc kubenswrapper[4740]: E0130 16:48:57.337153 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:49:11 crc kubenswrapper[4740]: I0130 16:49:11.336591 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:49:11 crc kubenswrapper[4740]: E0130 16:49:11.337539 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:49:23 crc kubenswrapper[4740]: I0130 16:49:23.343952 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:49:23 crc kubenswrapper[4740]: E0130 16:49:23.344901 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:49:28 crc kubenswrapper[4740]: E0130 16:49:28.525499 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 30 16:49:28 crc kubenswrapper[4740]: E0130 16:49:28.527716 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r85qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(bbb613b4-f2f3-4388-ae48-986e0281000f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 16:49:28 crc kubenswrapper[4740]: E0130 16:49:28.529150 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="bbb613b4-f2f3-4388-ae48-986e0281000f" Jan 30 16:49:28 crc kubenswrapper[4740]: E0130 16:49:28.898273 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="bbb613b4-f2f3-4388-ae48-986e0281000f" Jan 30 16:49:36 crc kubenswrapper[4740]: I0130 16:49:36.336182 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:49:36 crc kubenswrapper[4740]: E0130 16:49:36.337805 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:49:40 crc kubenswrapper[4740]: I0130 16:49:40.340305 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:49:42 crc kubenswrapper[4740]: I0130 16:49:42.648964 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 16:49:44 crc kubenswrapper[4740]: I0130 16:49:44.065388 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bbb613b4-f2f3-4388-ae48-986e0281000f","Type":"ContainerStarted","Data":"1ddb44c2a787a3a8a43b98b1eff70eb7bf71b376f4ba0109188aef078d8afba9"} Jan 30 16:49:44 crc kubenswrapper[4740]: I0130 16:49:44.088616 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.351409781 podStartE2EDuration="2m32.088591714s" podCreationTimestamp="2026-01-30 16:47:12 +0000 UTC" firstStartedPulling="2026-01-30 16:47:14.908533107 +0000 UTC m=+3083.545595706" lastFinishedPulling="2026-01-30 16:49:42.64571504 +0000 UTC m=+3231.282777639" observedRunningTime="2026-01-30 16:49:44.085526868 +0000 UTC m=+3232.722589467" watchObservedRunningTime="2026-01-30 16:49:44.088591714 +0000 UTC m=+3232.725654313" Jan 30 16:49:49 crc kubenswrapper[4740]: I0130 16:49:49.336874 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:49:49 crc kubenswrapper[4740]: E0130 16:49:49.338100 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:50:01 crc kubenswrapper[4740]: I0130 16:50:01.336425 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:50:01 crc kubenswrapper[4740]: E0130 16:50:01.337795 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:50:16 crc kubenswrapper[4740]: I0130 16:50:16.336518 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:50:16 crc kubenswrapper[4740]: E0130 16:50:16.337920 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:50:29 crc kubenswrapper[4740]: I0130 16:50:29.336696 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:50:29 crc kubenswrapper[4740]: E0130 16:50:29.337978 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:50:43 crc kubenswrapper[4740]: I0130 16:50:43.343986 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:50:43 crc kubenswrapper[4740]: E0130 16:50:43.346469 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:50:55 crc kubenswrapper[4740]: I0130 16:50:55.336025 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:50:55 crc kubenswrapper[4740]: E0130 16:50:55.336924 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:51:09 crc kubenswrapper[4740]: I0130 16:51:09.335762 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:51:09 crc kubenswrapper[4740]: E0130 16:51:09.337008 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" 
podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:51:20 crc kubenswrapper[4740]: I0130 16:51:20.335385 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:51:20 crc kubenswrapper[4740]: E0130 16:51:20.336156 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 16:51:34 crc kubenswrapper[4740]: I0130 16:51:34.336644 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:51:34 crc kubenswrapper[4740]: I0130 16:51:34.865856 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"4fc6a1061a3258056ce9f5b20530fdcefdf7354884a281b7cfed1c8fa6792990"} Jan 30 16:53:43 crc kubenswrapper[4740]: I0130 16:53:43.913756 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xj46h"] Jan 30 16:53:43 crc kubenswrapper[4740]: I0130 16:53:43.917471 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:53:43 crc kubenswrapper[4740]: I0130 16:53:43.943173 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xj46h"] Jan 30 16:53:44 crc kubenswrapper[4740]: I0130 16:53:44.044862 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-catalog-content\") pod \"redhat-operators-xj46h\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:53:44 crc kubenswrapper[4740]: I0130 16:53:44.044936 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9m79\" (UniqueName: \"kubernetes.io/projected/e849422b-6fb4-4a83-b4f5-dd752d0b048f-kube-api-access-r9m79\") pod \"redhat-operators-xj46h\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:53:44 crc kubenswrapper[4740]: I0130 16:53:44.045136 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-utilities\") pod \"redhat-operators-xj46h\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:53:44 crc kubenswrapper[4740]: I0130 16:53:44.147285 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-utilities\") pod \"redhat-operators-xj46h\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:53:44 crc kubenswrapper[4740]: I0130 16:53:44.147540 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-catalog-content\") pod \"redhat-operators-xj46h\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:53:44 crc kubenswrapper[4740]: I0130 16:53:44.147580 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9m79\" (UniqueName: \"kubernetes.io/projected/e849422b-6fb4-4a83-b4f5-dd752d0b048f-kube-api-access-r9m79\") pod \"redhat-operators-xj46h\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:53:44 crc kubenswrapper[4740]: I0130 16:53:44.148195 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-utilities\") pod \"redhat-operators-xj46h\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:53:44 crc kubenswrapper[4740]: I0130 16:53:44.148555 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-catalog-content\") pod \"redhat-operators-xj46h\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:53:44 crc kubenswrapper[4740]: I0130 16:53:44.181319 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9m79\" (UniqueName: \"kubernetes.io/projected/e849422b-6fb4-4a83-b4f5-dd752d0b048f-kube-api-access-r9m79\") pod \"redhat-operators-xj46h\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:53:44 crc kubenswrapper[4740]: I0130 16:53:44.255126 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:53:44 crc kubenswrapper[4740]: I0130 16:53:44.982662 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xj46h"] Jan 30 16:53:45 crc kubenswrapper[4740]: I0130 16:53:45.471198 4740 generic.go:334] "Generic (PLEG): container finished" podID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerID="0ffe3695a9f0fc6eb1ba717bf8def731995e5bf715d8b8709a61cd94bae0230f" exitCode=0 Jan 30 16:53:45 crc kubenswrapper[4740]: I0130 16:53:45.471377 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj46h" event={"ID":"e849422b-6fb4-4a83-b4f5-dd752d0b048f","Type":"ContainerDied","Data":"0ffe3695a9f0fc6eb1ba717bf8def731995e5bf715d8b8709a61cd94bae0230f"} Jan 30 16:53:45 crc kubenswrapper[4740]: I0130 16:53:45.471868 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj46h" event={"ID":"e849422b-6fb4-4a83-b4f5-dd752d0b048f","Type":"ContainerStarted","Data":"04298bad0bb0ae625c9a73f724398ad35a6c3e729f73b8c6e0b26d99b81ca811"} Jan 30 16:53:48 crc kubenswrapper[4740]: I0130 16:53:48.538606 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj46h" event={"ID":"e849422b-6fb4-4a83-b4f5-dd752d0b048f","Type":"ContainerStarted","Data":"39c64da7b52df7d6442a2ebc79dc2371e9198ce373489ffe670f55be10b26df3"} Jan 30 16:53:54 crc kubenswrapper[4740]: I0130 16:53:54.455044 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:53:54 crc kubenswrapper[4740]: I0130 16:53:54.455631 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:53:58 crc kubenswrapper[4740]: I0130 16:53:58.657698 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj46h" event={"ID":"e849422b-6fb4-4a83-b4f5-dd752d0b048f","Type":"ContainerDied","Data":"39c64da7b52df7d6442a2ebc79dc2371e9198ce373489ffe670f55be10b26df3"} Jan 30 16:53:58 crc kubenswrapper[4740]: I0130 16:53:58.657644 4740 generic.go:334] "Generic (PLEG): container finished" podID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerID="39c64da7b52df7d6442a2ebc79dc2371e9198ce373489ffe670f55be10b26df3" exitCode=0 Jan 30 16:53:59 crc kubenswrapper[4740]: I0130 16:53:59.674091 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj46h" event={"ID":"e849422b-6fb4-4a83-b4f5-dd752d0b048f","Type":"ContainerStarted","Data":"c955ad111eee1210ee78f789446f1fab13b1713aeb39e69d2ebc489488e8060d"} Jan 30 16:53:59 crc kubenswrapper[4740]: I0130 16:53:59.697029 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xj46h" podStartSLOduration=2.881493642 podStartE2EDuration="16.697003071s" podCreationTimestamp="2026-01-30 16:53:43 +0000 UTC" firstStartedPulling="2026-01-30 16:53:45.47749023 +0000 UTC m=+3474.114552819" lastFinishedPulling="2026-01-30 16:53:59.292999649 +0000 UTC 
m=+3487.930062248" observedRunningTime="2026-01-30 16:53:59.695213496 +0000 UTC m=+3488.332276095" watchObservedRunningTime="2026-01-30 16:53:59.697003071 +0000 UTC m=+3488.334065670" Jan 30 16:54:04 crc kubenswrapper[4740]: I0130 16:54:04.255880 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:54:04 crc kubenswrapper[4740]: I0130 16:54:04.257534 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:54:05 crc kubenswrapper[4740]: I0130 16:54:05.309112 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xj46h" podUID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerName="registry-server" probeResult="failure" output=< Jan 30 16:54:05 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 16:54:05 crc kubenswrapper[4740]: > Jan 30 16:54:14 crc kubenswrapper[4740]: I0130 16:54:14.317531 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:54:14 crc kubenswrapper[4740]: I0130 16:54:14.380788 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:54:15 crc kubenswrapper[4740]: I0130 16:54:15.098028 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xj46h"] Jan 30 16:54:15 crc kubenswrapper[4740]: I0130 16:54:15.848768 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xj46h" podUID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerName="registry-server" containerID="cri-o://c955ad111eee1210ee78f789446f1fab13b1713aeb39e69d2ebc489488e8060d" gracePeriod=2 Jan 30 16:54:16 crc kubenswrapper[4740]: I0130 16:54:16.882570 4740 generic.go:334] "Generic (PLEG): container finished" podID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerID="c955ad111eee1210ee78f789446f1fab13b1713aeb39e69d2ebc489488e8060d" exitCode=0 Jan 30 16:54:16 crc kubenswrapper[4740]: I0130 16:54:16.882909 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj46h" event={"ID":"e849422b-6fb4-4a83-b4f5-dd752d0b048f","Type":"ContainerDied","Data":"c955ad111eee1210ee78f789446f1fab13b1713aeb39e69d2ebc489488e8060d"} Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.348166 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.543405 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-utilities\") pod \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.543602 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-catalog-content\") pod \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.543766 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9m79\" (UniqueName: \"kubernetes.io/projected/e849422b-6fb4-4a83-b4f5-dd752d0b048f-kube-api-access-r9m79\") pod \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\" (UID: \"e849422b-6fb4-4a83-b4f5-dd752d0b048f\") " Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.544537 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-utilities" (OuterVolumeSpecName: "utilities") pod "e849422b-6fb4-4a83-b4f5-dd752d0b048f" (UID: "e849422b-6fb4-4a83-b4f5-dd752d0b048f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.545652 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.564772 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e849422b-6fb4-4a83-b4f5-dd752d0b048f-kube-api-access-r9m79" (OuterVolumeSpecName: "kube-api-access-r9m79") pod "e849422b-6fb4-4a83-b4f5-dd752d0b048f" (UID: "e849422b-6fb4-4a83-b4f5-dd752d0b048f"). InnerVolumeSpecName "kube-api-access-r9m79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.648168 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9m79\" (UniqueName: \"kubernetes.io/projected/e849422b-6fb4-4a83-b4f5-dd752d0b048f-kube-api-access-r9m79\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.699647 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e849422b-6fb4-4a83-b4f5-dd752d0b048f" (UID: "e849422b-6fb4-4a83-b4f5-dd752d0b048f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.750043 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e849422b-6fb4-4a83-b4f5-dd752d0b048f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.896308 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xj46h" Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.896438 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xj46h" event={"ID":"e849422b-6fb4-4a83-b4f5-dd752d0b048f","Type":"ContainerDied","Data":"04298bad0bb0ae625c9a73f724398ad35a6c3e729f73b8c6e0b26d99b81ca811"} Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.896780 4740 scope.go:117] "RemoveContainer" containerID="c955ad111eee1210ee78f789446f1fab13b1713aeb39e69d2ebc489488e8060d" Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.928403 4740 scope.go:117] "RemoveContainer" containerID="39c64da7b52df7d6442a2ebc79dc2371e9198ce373489ffe670f55be10b26df3" Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.937956 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xj46h"] Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.948422 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xj46h"] Jan 30 16:54:17 crc kubenswrapper[4740]: I0130 16:54:17.970858 4740 scope.go:117] "RemoveContainer" containerID="0ffe3695a9f0fc6eb1ba717bf8def731995e5bf715d8b8709a61cd94bae0230f" Jan 30 16:54:19 crc kubenswrapper[4740]: I0130 16:54:19.353114 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" path="/var/lib/kubelet/pods/e849422b-6fb4-4a83-b4f5-dd752d0b048f/volumes" Jan 30 16:54:24 crc kubenswrapper[4740]: I0130 16:54:24.454218 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:54:24 crc kubenswrapper[4740]: I0130 16:54:24.454797 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:54:54 crc kubenswrapper[4740]: I0130 16:54:54.454829 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:54:54 crc kubenswrapper[4740]: I0130 16:54:54.455498 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:54:54 crc kubenswrapper[4740]: I0130 16:54:54.455581 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 16:54:54 crc kubenswrapper[4740]: I0130 16:54:54.456708 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4fc6a1061a3258056ce9f5b20530fdcefdf7354884a281b7cfed1c8fa6792990"} 
pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:54:54 crc kubenswrapper[4740]: I0130 16:54:54.456782 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://4fc6a1061a3258056ce9f5b20530fdcefdf7354884a281b7cfed1c8fa6792990" gracePeriod=600 Jan 30 16:54:55 crc kubenswrapper[4740]: I0130 16:54:55.359976 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="4fc6a1061a3258056ce9f5b20530fdcefdf7354884a281b7cfed1c8fa6792990" exitCode=0 Jan 30 16:54:55 crc kubenswrapper[4740]: I0130 16:54:55.360065 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"4fc6a1061a3258056ce9f5b20530fdcefdf7354884a281b7cfed1c8fa6792990"} Jan 30 16:54:55 crc kubenswrapper[4740]: I0130 16:54:55.360943 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"} Jan 30 16:54:55 crc kubenswrapper[4740]: I0130 16:54:55.361074 4740 scope.go:117] "RemoveContainer" containerID="d1cb98b163fac0c36edaafecc7a38f3c20c92e2cb61738b7b0cb903ac27ead33" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.299556 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bcpvk"] Jan 30 16:54:57 crc kubenswrapper[4740]: E0130 16:54:57.300796 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerName="extract-utilities" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.300816 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerName="extract-utilities" Jan 30 16:54:57 crc kubenswrapper[4740]: E0130 16:54:57.300834 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerName="registry-server" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.300840 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerName="registry-server" Jan 30 16:54:57 crc kubenswrapper[4740]: E0130 16:54:57.300877 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerName="extract-content" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.300886 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerName="extract-content" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.301098 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e849422b-6fb4-4a83-b4f5-dd752d0b048f" containerName="registry-server" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.308441 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.409759 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bcpvk"] Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.485269 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-utilities\") pod \"certified-operators-bcpvk\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.485729 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9d5\" (UniqueName: \"kubernetes.io/projected/47344a02-7482-418d-9857-4114a47fb8f5-kube-api-access-fg9d5\") pod \"certified-operators-bcpvk\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.485928 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-catalog-content\") pod \"certified-operators-bcpvk\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.587968 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-catalog-content\") pod \"certified-operators-bcpvk\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.588159 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-utilities\") pod \"certified-operators-bcpvk\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.588215 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg9d5\" (UniqueName: \"kubernetes.io/projected/47344a02-7482-418d-9857-4114a47fb8f5-kube-api-access-fg9d5\") pod \"certified-operators-bcpvk\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.590067 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-catalog-content\") pod \"certified-operators-bcpvk\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.590578 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-utilities\") pod \"certified-operators-bcpvk\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.623394 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fg9d5\" (UniqueName: \"kubernetes.io/projected/47344a02-7482-418d-9857-4114a47fb8f5-kube-api-access-fg9d5\") pod \"certified-operators-bcpvk\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:54:57 crc kubenswrapper[4740]: I0130 16:54:57.650180 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:54:58 crc kubenswrapper[4740]: I0130 16:54:58.444374 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bcpvk"] Jan 30 16:54:59 crc kubenswrapper[4740]: I0130 16:54:59.443728 4740 generic.go:334] "Generic (PLEG): container finished" podID="47344a02-7482-418d-9857-4114a47fb8f5" containerID="befe007dabdc29bd439c4968edb4e0ae6910b3105a2b8df20a4f1375bdc716be" exitCode=0 Jan 30 16:54:59 crc kubenswrapper[4740]: I0130 16:54:59.443817 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcpvk" event={"ID":"47344a02-7482-418d-9857-4114a47fb8f5","Type":"ContainerDied","Data":"befe007dabdc29bd439c4968edb4e0ae6910b3105a2b8df20a4f1375bdc716be"} Jan 30 16:54:59 crc kubenswrapper[4740]: I0130 16:54:59.444593 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcpvk" event={"ID":"47344a02-7482-418d-9857-4114a47fb8f5","Type":"ContainerStarted","Data":"2ce3b0cb5c2e963e68a62d328c064313083d978d4be25e3ab3f0c6643041a5e1"} Jan 30 16:54:59 crc kubenswrapper[4740]: I0130 16:54:59.447208 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:55:01 crc kubenswrapper[4740]: I0130 16:55:01.494814 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcpvk" event={"ID":"47344a02-7482-418d-9857-4114a47fb8f5","Type":"ContainerStarted","Data":"389da2b92b2f33f2153e55ac58f7e0745190ce506d47eed6d2a774135d92288a"} Jan 30 16:55:04 crc kubenswrapper[4740]: I0130 16:55:04.532856 4740 generic.go:334] "Generic (PLEG): container finished" podID="47344a02-7482-418d-9857-4114a47fb8f5" containerID="389da2b92b2f33f2153e55ac58f7e0745190ce506d47eed6d2a774135d92288a" exitCode=0 Jan 30 16:55:04 crc kubenswrapper[4740]: I0130 16:55:04.532899 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcpvk" event={"ID":"47344a02-7482-418d-9857-4114a47fb8f5","Type":"ContainerDied","Data":"389da2b92b2f33f2153e55ac58f7e0745190ce506d47eed6d2a774135d92288a"} Jan 30 16:55:05 crc kubenswrapper[4740]: I0130 16:55:05.592505 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcpvk" event={"ID":"47344a02-7482-418d-9857-4114a47fb8f5","Type":"ContainerStarted","Data":"e973c21b4e3110ad7a6b6901ee15bf53ec987148d885004f36662893b3be198f"} Jan 30 16:55:05 crc kubenswrapper[4740]: I0130 16:55:05.638721 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bcpvk" podStartSLOduration=3.112935738 podStartE2EDuration="8.638691613s" podCreationTimestamp="2026-01-30 16:54:57 +0000 UTC" firstStartedPulling="2026-01-30 16:54:59.446883508 +0000 UTC m=+3548.083946107" lastFinishedPulling="2026-01-30 16:55:04.972639383 +0000 UTC m=+3553.609701982" observedRunningTime="2026-01-30 16:55:05.636101848 +0000 UTC m=+3554.273164447" watchObservedRunningTime="2026-01-30 
16:55:05.638691613 +0000 UTC m=+3554.275754212" Jan 30 16:55:07 crc kubenswrapper[4740]: I0130 16:55:07.650911 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:55:07 crc kubenswrapper[4740]: I0130 16:55:07.651313 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:55:07 crc kubenswrapper[4740]: I0130 16:55:07.708518 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.161485 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z2bcv"] Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.164813 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.197863 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2bcv"] Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.263591 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnxd2\" (UniqueName: \"kubernetes.io/projected/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-kube-api-access-tnxd2\") pod \"redhat-marketplace-z2bcv\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.264013 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-utilities\") pod \"redhat-marketplace-z2bcv\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.264241 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-catalog-content\") pod \"redhat-marketplace-z2bcv\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.366862 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-utilities\") pod \"redhat-marketplace-z2bcv\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.366944 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-catalog-content\") pod \"redhat-marketplace-z2bcv\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.367149 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnxd2\" (UniqueName: \"kubernetes.io/projected/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-kube-api-access-tnxd2\") pod \"redhat-marketplace-z2bcv\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " 
pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.367984 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-catalog-content\") pod \"redhat-marketplace-z2bcv\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.368671 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-utilities\") pod \"redhat-marketplace-z2bcv\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.392224 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnxd2\" (UniqueName: \"kubernetes.io/projected/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-kube-api-access-tnxd2\") pod \"redhat-marketplace-z2bcv\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:16 crc kubenswrapper[4740]: I0130 16:55:16.511863 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:17 crc kubenswrapper[4740]: I0130 16:55:17.157133 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2bcv"] Jan 30 16:55:17 crc kubenswrapper[4740]: I0130 16:55:17.715497 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:55:17 crc kubenswrapper[4740]: I0130 16:55:17.764612 4740 generic.go:334] "Generic (PLEG): container finished" podID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" containerID="a0f5a2ee9691db1bddaca98f55ed3a76ecdf6ebc5ce7009820c515d73b3a8a57" exitCode=0 Jan 30 16:55:17 crc kubenswrapper[4740]: I0130 16:55:17.764671 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2bcv" event={"ID":"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428","Type":"ContainerDied","Data":"a0f5a2ee9691db1bddaca98f55ed3a76ecdf6ebc5ce7009820c515d73b3a8a57"} Jan 30 16:55:17 crc kubenswrapper[4740]: I0130 16:55:17.764706 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2bcv" event={"ID":"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428","Type":"ContainerStarted","Data":"b4e16e243b1746377a4528e6d23b9e20c576618a00e0a244e2d42baadab8ff30"} Jan 30 16:55:19 crc kubenswrapper[4740]: I0130 16:55:19.790431 4740 generic.go:334] "Generic (PLEG): container finished" podID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" containerID="6912f2a0377e1ea4f30435f801fbdf41f69c833d0b53dba114301fd2d327ba3c" exitCode=0 Jan 30 16:55:19 crc kubenswrapper[4740]: I0130 16:55:19.790542 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2bcv" event={"ID":"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428","Type":"ContainerDied","Data":"6912f2a0377e1ea4f30435f801fbdf41f69c833d0b53dba114301fd2d327ba3c"} Jan 30 16:55:20 crc kubenswrapper[4740]: I0130 16:55:20.146777 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bcpvk"] Jan 30 16:55:20 crc kubenswrapper[4740]: I0130 16:55:20.148943 4740 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-bcpvk" podUID="47344a02-7482-418d-9857-4114a47fb8f5" containerName="registry-server" containerID="cri-o://e973c21b4e3110ad7a6b6901ee15bf53ec987148d885004f36662893b3be198f" gracePeriod=2 Jan 30 16:55:20 crc kubenswrapper[4740]: I0130 16:55:20.806322 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2bcv" event={"ID":"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428","Type":"ContainerStarted","Data":"c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19"} Jan 30 16:55:20 crc kubenswrapper[4740]: I0130 16:55:20.809154 4740 generic.go:334] "Generic (PLEG): container finished" podID="47344a02-7482-418d-9857-4114a47fb8f5" containerID="e973c21b4e3110ad7a6b6901ee15bf53ec987148d885004f36662893b3be198f" exitCode=0 Jan 30 16:55:20 crc kubenswrapper[4740]: I0130 16:55:20.809211 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcpvk" event={"ID":"47344a02-7482-418d-9857-4114a47fb8f5","Type":"ContainerDied","Data":"e973c21b4e3110ad7a6b6901ee15bf53ec987148d885004f36662893b3be198f"} Jan 30 16:55:20 crc kubenswrapper[4740]: I0130 16:55:20.842531 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z2bcv" podStartSLOduration=2.255156501 podStartE2EDuration="4.842499215s" podCreationTimestamp="2026-01-30 16:55:16 +0000 UTC" firstStartedPulling="2026-01-30 16:55:17.767285928 +0000 UTC m=+3566.404348527" lastFinishedPulling="2026-01-30 16:55:20.354628642 +0000 UTC m=+3568.991691241" observedRunningTime="2026-01-30 16:55:20.828891925 +0000 UTC m=+3569.465954524" watchObservedRunningTime="2026-01-30 16:55:20.842499215 +0000 UTC m=+3569.479561814" Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.021171 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.194621 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-catalog-content\") pod \"47344a02-7482-418d-9857-4114a47fb8f5\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.194869 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-utilities\") pod \"47344a02-7482-418d-9857-4114a47fb8f5\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.195181 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg9d5\" (UniqueName: \"kubernetes.io/projected/47344a02-7482-418d-9857-4114a47fb8f5-kube-api-access-fg9d5\") pod \"47344a02-7482-418d-9857-4114a47fb8f5\" (UID: \"47344a02-7482-418d-9857-4114a47fb8f5\") " Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.195927 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-utilities" (OuterVolumeSpecName: "utilities") pod "47344a02-7482-418d-9857-4114a47fb8f5" (UID: "47344a02-7482-418d-9857-4114a47fb8f5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.204888 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47344a02-7482-418d-9857-4114a47fb8f5-kube-api-access-fg9d5" (OuterVolumeSpecName: "kube-api-access-fg9d5") pod "47344a02-7482-418d-9857-4114a47fb8f5" (UID: "47344a02-7482-418d-9857-4114a47fb8f5"). InnerVolumeSpecName "kube-api-access-fg9d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.276083 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47344a02-7482-418d-9857-4114a47fb8f5" (UID: "47344a02-7482-418d-9857-4114a47fb8f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.297991 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg9d5\" (UniqueName: \"kubernetes.io/projected/47344a02-7482-418d-9857-4114a47fb8f5-kube-api-access-fg9d5\") on node \"crc\" DevicePath \"\"" Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.298039 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.298053 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47344a02-7482-418d-9857-4114a47fb8f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.854566 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcpvk" event={"ID":"47344a02-7482-418d-9857-4114a47fb8f5","Type":"ContainerDied","Data":"2ce3b0cb5c2e963e68a62d328c064313083d978d4be25e3ab3f0c6643041a5e1"} Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.854950 4740 scope.go:117] "RemoveContainer" containerID="e973c21b4e3110ad7a6b6901ee15bf53ec987148d885004f36662893b3be198f" Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.854714 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bcpvk" Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.896636 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bcpvk"] Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.907643 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bcpvk"] Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.954491 4740 scope.go:117] "RemoveContainer" containerID="389da2b92b2f33f2153e55ac58f7e0745190ce506d47eed6d2a774135d92288a" Jan 30 16:55:21 crc kubenswrapper[4740]: I0130 16:55:21.990178 4740 scope.go:117] "RemoveContainer" containerID="befe007dabdc29bd439c4968edb4e0ae6910b3105a2b8df20a4f1375bdc716be" Jan 30 16:55:23 crc kubenswrapper[4740]: I0130 16:55:23.348156 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47344a02-7482-418d-9857-4114a47fb8f5" path="/var/lib/kubelet/pods/47344a02-7482-418d-9857-4114a47fb8f5/volumes" Jan 30 16:55:26 crc kubenswrapper[4740]: I0130 16:55:26.512405 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:26 crc kubenswrapper[4740]: I0130 16:55:26.512960 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:26 crc kubenswrapper[4740]: I0130 16:55:26.585553 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:26 crc kubenswrapper[4740]: I0130 16:55:26.972114 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:27 crc kubenswrapper[4740]: I0130 16:55:27.754421 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2bcv"] Jan 30 16:55:28 crc kubenswrapper[4740]: I0130 16:55:28.939954 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z2bcv" podUID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" containerName="registry-server" containerID="cri-o://c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19" gracePeriod=2 Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.746635 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.914683 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-utilities\") pod \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.914862 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnxd2\" (UniqueName: \"kubernetes.io/projected/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-kube-api-access-tnxd2\") pod \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.914903 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-catalog-content\") pod \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\" (UID: \"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428\") " Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.917692 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-utilities" (OuterVolumeSpecName: "utilities") pod "ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" (UID: "ba89fa2e-600b-4c8f-b61c-a99aeb5f4428"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.930686 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-kube-api-access-tnxd2" (OuterVolumeSpecName: "kube-api-access-tnxd2") pod "ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" (UID: "ba89fa2e-600b-4c8f-b61c-a99aeb5f4428"). InnerVolumeSpecName "kube-api-access-tnxd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.954590 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" (UID: "ba89fa2e-600b-4c8f-b61c-a99aeb5f4428"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.984063 4740 generic.go:334] "Generic (PLEG): container finished" podID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" containerID="c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19" exitCode=0 Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.984145 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2bcv" event={"ID":"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428","Type":"ContainerDied","Data":"c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19"} Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.984183 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z2bcv" event={"ID":"ba89fa2e-600b-4c8f-b61c-a99aeb5f4428","Type":"ContainerDied","Data":"b4e16e243b1746377a4528e6d23b9e20c576618a00e0a244e2d42baadab8ff30"} Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.984236 4740 scope.go:117] "RemoveContainer" containerID="c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19" Jan 30 16:55:29 crc kubenswrapper[4740]: I0130 16:55:29.984533 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z2bcv" Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.018595 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.018675 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnxd2\" (UniqueName: \"kubernetes.io/projected/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-kube-api-access-tnxd2\") on node \"crc\" DevicePath \"\"" Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.018687 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.043750 4740 scope.go:117] "RemoveContainer" containerID="6912f2a0377e1ea4f30435f801fbdf41f69c833d0b53dba114301fd2d327ba3c" Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.054710 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2bcv"] Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.081610 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z2bcv"] Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.083290 4740 scope.go:117] "RemoveContainer" containerID="a0f5a2ee9691db1bddaca98f55ed3a76ecdf6ebc5ce7009820c515d73b3a8a57" Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.151229 4740 scope.go:117] "RemoveContainer" containerID="c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19" Jan 30 16:55:30 crc kubenswrapper[4740]: E0130 16:55:30.153635 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19\": container with ID starting with c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19 not found: ID does not exist" containerID="c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19" Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.153701 4740 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19"} err="failed to get container status \"c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19\": rpc error: code = NotFound desc = could not find container \"c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19\": container with ID starting with c5da10dfa617c7ac1adea279ac5259e90cff2c149dbfbd325966cf5d081e1b19 not found: ID does not exist" Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.153929 4740 scope.go:117] "RemoveContainer" containerID="6912f2a0377e1ea4f30435f801fbdf41f69c833d0b53dba114301fd2d327ba3c" Jan 30 16:55:30 crc kubenswrapper[4740]: E0130 16:55:30.155826 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6912f2a0377e1ea4f30435f801fbdf41f69c833d0b53dba114301fd2d327ba3c\": container with ID starting with 6912f2a0377e1ea4f30435f801fbdf41f69c833d0b53dba114301fd2d327ba3c not found: ID does not exist" containerID="6912f2a0377e1ea4f30435f801fbdf41f69c833d0b53dba114301fd2d327ba3c" Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.155967 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6912f2a0377e1ea4f30435f801fbdf41f69c833d0b53dba114301fd2d327ba3c"} err="failed to get container status \"6912f2a0377e1ea4f30435f801fbdf41f69c833d0b53dba114301fd2d327ba3c\": rpc error: code = NotFound desc = could not find container \"6912f2a0377e1ea4f30435f801fbdf41f69c833d0b53dba114301fd2d327ba3c\": container with ID starting with 6912f2a0377e1ea4f30435f801fbdf41f69c833d0b53dba114301fd2d327ba3c not found: ID does not exist" Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.156078 4740 scope.go:117] "RemoveContainer" containerID="a0f5a2ee9691db1bddaca98f55ed3a76ecdf6ebc5ce7009820c515d73b3a8a57" Jan 30 16:55:30 crc kubenswrapper[4740]: E0130 16:55:30.157468 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f5a2ee9691db1bddaca98f55ed3a76ecdf6ebc5ce7009820c515d73b3a8a57\": container with ID starting with a0f5a2ee9691db1bddaca98f55ed3a76ecdf6ebc5ce7009820c515d73b3a8a57 not found: ID does not exist" containerID="a0f5a2ee9691db1bddaca98f55ed3a76ecdf6ebc5ce7009820c515d73b3a8a57" Jan 30 16:55:30 crc kubenswrapper[4740]: I0130 16:55:30.157603 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f5a2ee9691db1bddaca98f55ed3a76ecdf6ebc5ce7009820c515d73b3a8a57"} err="failed to get container status \"a0f5a2ee9691db1bddaca98f55ed3a76ecdf6ebc5ce7009820c515d73b3a8a57\": rpc error: code = NotFound desc = could not find container \"a0f5a2ee9691db1bddaca98f55ed3a76ecdf6ebc5ce7009820c515d73b3a8a57\": container with ID starting with a0f5a2ee9691db1bddaca98f55ed3a76ecdf6ebc5ce7009820c515d73b3a8a57 not found: ID does not exist" Jan 30 16:55:31 crc kubenswrapper[4740]: I0130 16:55:31.349035 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" path="/var/lib/kubelet/pods/ba89fa2e-600b-4c8f-b61c-a99aeb5f4428/volumes" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.118558 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cx475"] Jan 30 16:56:03 crc kubenswrapper[4740]: E0130 16:56:03.119635 4740 cpu_manager.go:410] "RemoveStaleState: removing container" 
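
Note: the error-level "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above are benign. RemoveContainer races with the runtime's own cleanup; CRI-O answers with gRPC code NotFound, and since a missing container is exactly the desired end state, the kubelet logs the error and moves on. Cleanup code against a gRPC-backed runtime typically treats that code as success; a sketch, assuming the google.golang.org/grpc module:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer treats NotFound as success: if the runtime no longer
    // knows the ID, the container is already gone. deleteFromRuntime stands
    // in for the real CRI RemoveContainer RPC.
    func removeContainer(id string, deleteFromRuntime func(string) error) error {
        err := deleteFromRuntime(id)
        if err == nil || status.Code(err) == codes.NotFound {
            return nil // already removed, which was the goal
        }
        return fmt.Errorf("remove %s: %w", id, err)
    }

    func main() {
        // Simulate the runtime's answer seen in the log.
        notFound := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        fmt.Println(removeContainer("c5da10dfa617", notFound)) // <nil>
    }
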
podUID="47344a02-7482-418d-9857-4114a47fb8f5" containerName="extract-content" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.119651 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="47344a02-7482-418d-9857-4114a47fb8f5" containerName="extract-content" Jan 30 16:56:03 crc kubenswrapper[4740]: E0130 16:56:03.119693 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" containerName="extract-content" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.119700 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" containerName="extract-content" Jan 30 16:56:03 crc kubenswrapper[4740]: E0130 16:56:03.119711 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" containerName="extract-utilities" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.119718 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" containerName="extract-utilities" Jan 30 16:56:03 crc kubenswrapper[4740]: E0130 16:56:03.119726 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" containerName="registry-server" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.119734 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" containerName="registry-server" Jan 30 16:56:03 crc kubenswrapper[4740]: E0130 16:56:03.119755 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47344a02-7482-418d-9857-4114a47fb8f5" containerName="registry-server" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.119760 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="47344a02-7482-418d-9857-4114a47fb8f5" containerName="registry-server" Jan 30 16:56:03 crc kubenswrapper[4740]: E0130 16:56:03.119771 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47344a02-7482-418d-9857-4114a47fb8f5" containerName="extract-utilities" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.119777 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="47344a02-7482-418d-9857-4114a47fb8f5" containerName="extract-utilities" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.119982 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba89fa2e-600b-4c8f-b61c-a99aeb5f4428" containerName="registry-server" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.120004 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="47344a02-7482-418d-9857-4114a47fb8f5" containerName="registry-server" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.121753 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.141946 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cx475"] Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.299269 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-catalog-content\") pod \"community-operators-cx475\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.299516 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbhxl\" (UniqueName: \"kubernetes.io/projected/a9a385fd-977a-48d0-b854-cf27a74e2d7d-kube-api-access-nbhxl\") pod \"community-operators-cx475\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.299759 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-utilities\") pod \"community-operators-cx475\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.402128 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-catalog-content\") pod \"community-operators-cx475\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.402375 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbhxl\" (UniqueName: \"kubernetes.io/projected/a9a385fd-977a-48d0-b854-cf27a74e2d7d-kube-api-access-nbhxl\") pod \"community-operators-cx475\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.402439 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-utilities\") pod \"community-operators-cx475\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.403248 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-utilities\") pod \"community-operators-cx475\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.403471 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-catalog-content\") pod \"community-operators-cx475\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.453442 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nbhxl\" (UniqueName: \"kubernetes.io/projected/a9a385fd-977a-48d0-b854-cf27a74e2d7d-kube-api-access-nbhxl\") pod \"community-operators-cx475\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:03 crc kubenswrapper[4740]: I0130 16:56:03.753370 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:04 crc kubenswrapper[4740]: I0130 16:56:04.478033 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cx475"] Jan 30 16:56:05 crc kubenswrapper[4740]: I0130 16:56:05.360063 4740 generic.go:334] "Generic (PLEG): container finished" podID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" containerID="d71a3b274337a4cb61f9516d59721f488adb8421a343cc072feb83c66d76782e" exitCode=0 Jan 30 16:56:05 crc kubenswrapper[4740]: I0130 16:56:05.360155 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx475" event={"ID":"a9a385fd-977a-48d0-b854-cf27a74e2d7d","Type":"ContainerDied","Data":"d71a3b274337a4cb61f9516d59721f488adb8421a343cc072feb83c66d76782e"} Jan 30 16:56:05 crc kubenswrapper[4740]: I0130 16:56:05.360418 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx475" event={"ID":"a9a385fd-977a-48d0-b854-cf27a74e2d7d","Type":"ContainerStarted","Data":"59e86c644aa7e53266170a8f00ec84f3ca3728113ecbb63ad90d2d89e176b795"} Jan 30 16:56:08 crc kubenswrapper[4740]: I0130 16:56:08.404951 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx475" event={"ID":"a9a385fd-977a-48d0-b854-cf27a74e2d7d","Type":"ContainerStarted","Data":"f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324"} Jan 30 16:56:13 crc kubenswrapper[4740]: I0130 16:56:13.492983 4740 generic.go:334] "Generic (PLEG): container finished" podID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" containerID="f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324" exitCode=0 Jan 30 16:56:13 crc kubenswrapper[4740]: I0130 16:56:13.493200 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx475" event={"ID":"a9a385fd-977a-48d0-b854-cf27a74e2d7d","Type":"ContainerDied","Data":"f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324"} Jan 30 16:56:14 crc kubenswrapper[4740]: I0130 16:56:14.513223 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx475" event={"ID":"a9a385fd-977a-48d0-b854-cf27a74e2d7d","Type":"ContainerStarted","Data":"1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b"} Jan 30 16:56:14 crc kubenswrapper[4740]: I0130 16:56:14.609138 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cx475" podStartSLOduration=2.934523733 podStartE2EDuration="11.609092002s" podCreationTimestamp="2026-01-30 16:56:03 +0000 UTC" firstStartedPulling="2026-01-30 16:56:05.362635083 +0000 UTC m=+3613.999697682" lastFinishedPulling="2026-01-30 16:56:14.037203352 +0000 UTC m=+3622.674265951" observedRunningTime="2026-01-30 16:56:14.536534471 +0000 UTC m=+3623.173597090" watchObservedRunningTime="2026-01-30 16:56:14.609092002 +0000 UTC m=+3623.246154601" Jan 30 16:56:23 crc kubenswrapper[4740]: I0130 16:56:23.754195 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:23 crc kubenswrapper[4740]: I0130 16:56:23.754852 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:23 crc kubenswrapper[4740]: I0130 16:56:23.825463 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:24 crc kubenswrapper[4740]: I0130 16:56:24.738884 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:24 crc kubenswrapper[4740]: I0130 16:56:24.803652 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cx475"] Jan 30 16:56:26 crc kubenswrapper[4740]: I0130 16:56:26.688626 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cx475" podUID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" containerName="registry-server" containerID="cri-o://1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b" gracePeriod=2 Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.594277 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.685916 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-utilities\") pod \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.686503 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbhxl\" (UniqueName: \"kubernetes.io/projected/a9a385fd-977a-48d0-b854-cf27a74e2d7d-kube-api-access-nbhxl\") pod \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.686865 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-catalog-content\") pod \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\" (UID: \"a9a385fd-977a-48d0-b854-cf27a74e2d7d\") " Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.687271 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-utilities" (OuterVolumeSpecName: "utilities") pod "a9a385fd-977a-48d0-b854-cf27a74e2d7d" (UID: "a9a385fd-977a-48d0-b854-cf27a74e2d7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.688591 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.702698 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a385fd-977a-48d0-b854-cf27a74e2d7d-kube-api-access-nbhxl" (OuterVolumeSpecName: "kube-api-access-nbhxl") pod "a9a385fd-977a-48d0-b854-cf27a74e2d7d" (UID: "a9a385fd-977a-48d0-b854-cf27a74e2d7d"). InnerVolumeSpecName "kube-api-access-nbhxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.733372 4740 generic.go:334] "Generic (PLEG): container finished" podID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" containerID="1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b" exitCode=0 Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.733449 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx475" event={"ID":"a9a385fd-977a-48d0-b854-cf27a74e2d7d","Type":"ContainerDied","Data":"1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b"} Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.733490 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx475" event={"ID":"a9a385fd-977a-48d0-b854-cf27a74e2d7d","Type":"ContainerDied","Data":"59e86c644aa7e53266170a8f00ec84f3ca3728113ecbb63ad90d2d89e176b795"} Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.733520 4740 scope.go:117] "RemoveContainer" containerID="1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.733775 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cx475" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.791856 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbhxl\" (UniqueName: \"kubernetes.io/projected/a9a385fd-977a-48d0-b854-cf27a74e2d7d-kube-api-access-nbhxl\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.804105 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9a385fd-977a-48d0-b854-cf27a74e2d7d" (UID: "a9a385fd-977a-48d0-b854-cf27a74e2d7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.816158 4740 scope.go:117] "RemoveContainer" containerID="f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.863533 4740 scope.go:117] "RemoveContainer" containerID="d71a3b274337a4cb61f9516d59721f488adb8421a343cc072feb83c66d76782e" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.893589 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a385fd-977a-48d0-b854-cf27a74e2d7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.920098 4740 scope.go:117] "RemoveContainer" containerID="1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b" Jan 30 16:56:27 crc kubenswrapper[4740]: E0130 16:56:27.920832 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b\": container with ID starting with 1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b not found: ID does not exist" containerID="1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.920887 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b"} err="failed to get container status \"1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b\": rpc error: code = NotFound desc = could not find container \"1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b\": container with ID starting with 1fa260266b791dc11040a42ae9df569db6f1a89f3df47286cd2100d440f3420b not found: ID does not exist" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.920924 4740 scope.go:117] "RemoveContainer" containerID="f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324" Jan 30 16:56:27 crc kubenswrapper[4740]: E0130 16:56:27.921265 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324\": container with ID starting with f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324 not found: ID does not exist" containerID="f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.921312 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324"} err="failed to get container status \"f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324\": rpc error: code = NotFound desc = could not find container \"f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324\": container with ID starting with f35afafb55805d1197f69c5e56467f13b99b04b90fd4faacaecde855b54b7324 not found: ID does not exist" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.921371 4740 scope.go:117] "RemoveContainer" containerID="d71a3b274337a4cb61f9516d59721f488adb8421a343cc072feb83c66d76782e" Jan 30 16:56:27 crc kubenswrapper[4740]: E0130 16:56:27.922172 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d71a3b274337a4cb61f9516d59721f488adb8421a343cc072feb83c66d76782e\": container with ID starting with d71a3b274337a4cb61f9516d59721f488adb8421a343cc072feb83c66d76782e not found: ID does not exist" containerID="d71a3b274337a4cb61f9516d59721f488adb8421a343cc072feb83c66d76782e" Jan 30 16:56:27 crc kubenswrapper[4740]: I0130 16:56:27.922207 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71a3b274337a4cb61f9516d59721f488adb8421a343cc072feb83c66d76782e"} err="failed to get container status \"d71a3b274337a4cb61f9516d59721f488adb8421a343cc072feb83c66d76782e\": rpc error: code = NotFound desc = could not find container \"d71a3b274337a4cb61f9516d59721f488adb8421a343cc072feb83c66d76782e\": container with ID starting with d71a3b274337a4cb61f9516d59721f488adb8421a343cc072feb83c66d76782e not found: ID does not exist" Jan 30 16:56:28 crc kubenswrapper[4740]: I0130 16:56:28.114757 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cx475"] Jan 30 16:56:28 crc kubenswrapper[4740]: I0130 16:56:28.127810 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cx475"] Jan 30 16:56:29 crc kubenswrapper[4740]: I0130 16:56:29.351170 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" path="/var/lib/kubelet/pods/a9a385fd-977a-48d0-b854-cf27a74e2d7d/volumes" Jan 30 16:56:54 crc kubenswrapper[4740]: I0130 16:56:54.454771 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:56:54 crc kubenswrapper[4740]: I0130 16:56:54.455361 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:57:24 crc kubenswrapper[4740]: I0130 16:57:24.455094 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:57:24 crc kubenswrapper[4740]: I0130 16:57:24.455903 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:57:39 crc kubenswrapper[4740]: I0130 16:57:39.576947 4740 generic.go:334] "Generic (PLEG): container finished" podID="bbb613b4-f2f3-4388-ae48-986e0281000f" containerID="1ddb44c2a787a3a8a43b98b1eff70eb7bf71b376f4ba0109188aef078d8afba9" exitCode=0 Jan 30 16:57:39 crc kubenswrapper[4740]: I0130 16:57:39.577048 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bbb613b4-f2f3-4388-ae48-986e0281000f","Type":"ContainerDied","Data":"1ddb44c2a787a3a8a43b98b1eff70eb7bf71b376f4ba0109188aef078d8afba9"} Jan 30 16:57:41 crc 
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.464315 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.601960 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"bbb613b4-f2f3-4388-ae48-986e0281000f","Type":"ContainerDied","Data":"df9373867e55878748803ac907db0b6c57eedc622b583d6f32ba49a80e869d3b"}
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.602012 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df9373867e55878748803ac907db0b6c57eedc622b583d6f32ba49a80e869d3b"
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.602031 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.605914 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-ca-certs\") pod \"bbb613b4-f2f3-4388-ae48-986e0281000f\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") "
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.606042 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config-secret\") pod \"bbb613b4-f2f3-4388-ae48-986e0281000f\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") "
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.606097 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-temporary\") pod \"bbb613b4-f2f3-4388-ae48-986e0281000f\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") "
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.606242 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r85qf\" (UniqueName: \"kubernetes.io/projected/bbb613b4-f2f3-4388-ae48-986e0281000f-kube-api-access-r85qf\") pod \"bbb613b4-f2f3-4388-ae48-986e0281000f\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") "
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.606397 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-ssh-key\") pod \"bbb613b4-f2f3-4388-ae48-986e0281000f\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") "
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.606504 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-workdir\") pod \"bbb613b4-f2f3-4388-ae48-986e0281000f\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") "
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.606614 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"bbb613b4-f2f3-4388-ae48-986e0281000f\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") "
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.606771 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-config-data\") pod \"bbb613b4-f2f3-4388-ae48-986e0281000f\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") "
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.606867 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config\") pod \"bbb613b4-f2f3-4388-ae48-986e0281000f\" (UID: \"bbb613b4-f2f3-4388-ae48-986e0281000f\") "
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.607429 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-config-data" (OuterVolumeSpecName: "config-data") pod "bbb613b4-f2f3-4388-ae48-986e0281000f" (UID: "bbb613b4-f2f3-4388-ae48-986e0281000f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.607662 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "bbb613b4-f2f3-4388-ae48-986e0281000f" (UID: "bbb613b4-f2f3-4388-ae48-986e0281000f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.607889 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.607917 4740 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.619892 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb613b4-f2f3-4388-ae48-986e0281000f-kube-api-access-r85qf" (OuterVolumeSpecName: "kube-api-access-r85qf") pod "bbb613b4-f2f3-4388-ae48-986e0281000f" (UID: "bbb613b4-f2f3-4388-ae48-986e0281000f"). InnerVolumeSpecName "kube-api-access-r85qf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.662902 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "bbb613b4-f2f3-4388-ae48-986e0281000f" (UID: "bbb613b4-f2f3-4388-ae48-986e0281000f"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.673292 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "bbb613b4-f2f3-4388-ae48-986e0281000f" (UID: "bbb613b4-f2f3-4388-ae48-986e0281000f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.692225 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bbb613b4-f2f3-4388-ae48-986e0281000f" (UID: "bbb613b4-f2f3-4388-ae48-986e0281000f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.715290 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r85qf\" (UniqueName: \"kubernetes.io/projected/bbb613b4-f2f3-4388-ae48-986e0281000f-kube-api-access-r85qf\") on node \"crc\" DevicePath \"\""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.715332 4740 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.715389 4740 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.715404 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.718064 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "bbb613b4-f2f3-4388-ae48-986e0281000f" (UID: "bbb613b4-f2f3-4388-ae48-986e0281000f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.749190 4740 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.818133 4740 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.818179 4740 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bbb613b4-f2f3-4388-ae48-986e0281000f-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:57:41 crc kubenswrapper[4740]: I0130 16:57:41.818210 4740 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/bbb613b4-f2f3-4388-ae48-986e0281000f-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:57:42 crc kubenswrapper[4740]: I0130 16:57:42.065770 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "bbb613b4-f2f3-4388-ae48-986e0281000f" (UID: "bbb613b4-f2f3-4388-ae48-986e0281000f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:57:42 crc kubenswrapper[4740]: I0130 16:57:42.125877 4740 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/bbb613b4-f2f3-4388-ae48-986e0281000f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.133042 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 16:57:50 crc kubenswrapper[4740]: E0130 16:57:50.134196 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" containerName="extract-content" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.134212 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" containerName="extract-content" Jan 30 16:57:50 crc kubenswrapper[4740]: E0130 16:57:50.134229 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbb613b4-f2f3-4388-ae48-986e0281000f" containerName="tempest-tests-tempest-tests-runner" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.134235 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbb613b4-f2f3-4388-ae48-986e0281000f" containerName="tempest-tests-tempest-tests-runner" Jan 30 16:57:50 crc kubenswrapper[4740]: E0130 16:57:50.134260 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" containerName="extract-utilities" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.134267 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" containerName="extract-utilities" Jan 30 16:57:50 crc kubenswrapper[4740]: E0130 16:57:50.134291 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" containerName="registry-server" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.134296 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" containerName="registry-server" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.134559 4740 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="bbb613b4-f2f3-4388-ae48-986e0281000f" containerName="tempest-tests-tempest-tests-runner" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.134577 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a385fd-977a-48d0-b854-cf27a74e2d7d" containerName="registry-server" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.135442 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.138398 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nbdq2" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.146018 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.317159 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf2lj\" (UniqueName: \"kubernetes.io/projected/7681e657-e354-4e35-8cd2-351cc51fdb4a-kube-api-access-cf2lj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7681e657-e354-4e35-8cd2-351cc51fdb4a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.317319 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7681e657-e354-4e35-8cd2-351cc51fdb4a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.419042 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7681e657-e354-4e35-8cd2-351cc51fdb4a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.419334 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf2lj\" (UniqueName: \"kubernetes.io/projected/7681e657-e354-4e35-8cd2-351cc51fdb4a-kube-api-access-cf2lj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7681e657-e354-4e35-8cd2-351cc51fdb4a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.419619 4740 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7681e657-e354-4e35-8cd2-351cc51fdb4a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.441745 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf2lj\" (UniqueName: \"kubernetes.io/projected/7681e657-e354-4e35-8cd2-351cc51fdb4a-kube-api-access-cf2lj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7681e657-e354-4e35-8cd2-351cc51fdb4a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 16:57:50 crc 
kubenswrapper[4740]: I0130 16:57:50.451720 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7681e657-e354-4e35-8cd2-351cc51fdb4a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.459261 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 30 16:57:50 crc kubenswrapper[4740]: I0130 16:57:50.964379 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 30 16:57:51 crc kubenswrapper[4740]: I0130 16:57:51.706649 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7681e657-e354-4e35-8cd2-351cc51fdb4a","Type":"ContainerStarted","Data":"ea0413fe769b390a83aa50c8e5930495926de0922e6acc4aa4dc1c8aabeb6728"} Jan 30 16:57:53 crc kubenswrapper[4740]: I0130 16:57:53.732369 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7681e657-e354-4e35-8cd2-351cc51fdb4a","Type":"ContainerStarted","Data":"b1dbde6c34314abcbfd3afdaf43395e177f86dd80974d97bdd2a3833d5031be5"} Jan 30 16:57:53 crc kubenswrapper[4740]: I0130 16:57:53.758280 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.826999978 podStartE2EDuration="3.758244038s" podCreationTimestamp="2026-01-30 16:57:50 +0000 UTC" firstStartedPulling="2026-01-30 16:57:50.970635909 +0000 UTC m=+3719.607698508" lastFinishedPulling="2026-01-30 16:57:52.901879969 +0000 UTC m=+3721.538942568" observedRunningTime="2026-01-30 16:57:53.746728621 +0000 UTC m=+3722.383791220" watchObservedRunningTime="2026-01-30 16:57:53.758244038 +0000 UTC m=+3722.395306657" Jan 30 16:57:54 crc kubenswrapper[4740]: I0130 16:57:54.455113 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:57:54 crc kubenswrapper[4740]: I0130 16:57:54.455195 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:57:54 crc kubenswrapper[4740]: I0130 16:57:54.455261 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 16:57:54 crc kubenswrapper[4740]: I0130 16:57:54.456342 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:57:54 crc kubenswrapper[4740]: I0130 
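The "Observed pod startup duration" entry above carries its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling). Reproducing the numbers from that entry, truncated to microseconds since Python's datetime does not carry nanoseconds:

from datetime import datetime

# Timestamps copied from the pod_startup_latency_tracker entry above.
FMT = "%Y-%m-%d %H:%M:%S.%f"
created  = datetime.strptime("2026-01-30 16:57:50.000000", FMT)
pull_beg = datetime.strptime("2026-01-30 16:57:50.970635", FMT)
pull_end = datetime.strptime("2026-01-30 16:57:52.901879", FMT)
observed = datetime.strptime("2026-01-30 16:57:53.758244", FMT)

e2e = (observed - created).total_seconds()
slo = e2e - (pull_end - pull_beg).total_seconds()
print(f"podStartE2EDuration ~ {e2e:.6f}s")  # ~3.758244, matching "3.758244038s"
print(f"podStartSLOduration ~ {slo:.6f}s")  # ~1.827000, matching 1.826999978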
Jan 30 16:57:54 crc kubenswrapper[4740]: I0130 16:57:54.456444 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" gracePeriod=600
Jan 30 16:57:54 crc kubenswrapper[4740]: E0130 16:57:54.586554 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:57:54 crc kubenswrapper[4740]: I0130 16:57:54.745452 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" exitCode=0
Jan 30 16:57:54 crc kubenswrapper[4740]: I0130 16:57:54.745521 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"}
Jan 30 16:57:54 crc kubenswrapper[4740]: I0130 16:57:54.745640 4740 scope.go:117] "RemoveContainer" containerID="4fc6a1061a3258056ce9f5b20530fdcefdf7354884a281b7cfed1c8fa6792990"
Jan 30 16:57:54 crc kubenswrapper[4740]: I0130 16:57:54.746702 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 16:57:54 crc kubenswrapper[4740]: E0130 16:57:54.747103 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:58:06 crc kubenswrapper[4740]: I0130 16:58:06.335692 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 16:58:06 crc kubenswrapper[4740]: E0130 16:58:06.336897 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:58:17 crc kubenswrapper[4740]: I0130 16:58:17.336665 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 16:58:17 crc kubenswrapper[4740]: E0130 16:58:17.337501 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:58:19 crc kubenswrapper[4740]: I0130 16:58:19.736590 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xk8qw/must-gather-7pfzw"]
Jan 30 16:58:19 crc kubenswrapper[4740]: I0130 16:58:19.739773 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/must-gather-7pfzw"
Jan 30 16:58:19 crc kubenswrapper[4740]: I0130 16:58:19.742703 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xk8qw"/"openshift-service-ca.crt"
Jan 30 16:58:19 crc kubenswrapper[4740]: I0130 16:58:19.742730 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xk8qw"/"kube-root-ca.crt"
Jan 30 16:58:19 crc kubenswrapper[4740]: I0130 16:58:19.786922 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xk8qw/must-gather-7pfzw"]
Jan 30 16:58:19 crc kubenswrapper[4740]: I0130 16:58:19.873946 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-must-gather-output\") pod \"must-gather-7pfzw\" (UID: \"b3b057b9-5afa-4aae-80a0-2963a1a54b2a\") " pod="openshift-must-gather-xk8qw/must-gather-7pfzw"
Jan 30 16:58:19 crc kubenswrapper[4740]: I0130 16:58:19.874122 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25drt\" (UniqueName: \"kubernetes.io/projected/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-kube-api-access-25drt\") pod \"must-gather-7pfzw\" (UID: \"b3b057b9-5afa-4aae-80a0-2963a1a54b2a\") " pod="openshift-must-gather-xk8qw/must-gather-7pfzw"
Jan 30 16:58:19 crc kubenswrapper[4740]: I0130 16:58:19.977134 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-must-gather-output\") pod \"must-gather-7pfzw\" (UID: \"b3b057b9-5afa-4aae-80a0-2963a1a54b2a\") " pod="openshift-must-gather-xk8qw/must-gather-7pfzw"
Jan 30 16:58:19 crc kubenswrapper[4740]: I0130 16:58:19.977277 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25drt\" (UniqueName: \"kubernetes.io/projected/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-kube-api-access-25drt\") pod \"must-gather-7pfzw\" (UID: \"b3b057b9-5afa-4aae-80a0-2963a1a54b2a\") " pod="openshift-must-gather-xk8qw/must-gather-7pfzw"
Jan 30 16:58:19 crc kubenswrapper[4740]: I0130 16:58:19.978627 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-must-gather-output\") pod \"must-gather-7pfzw\" (UID: \"b3b057b9-5afa-4aae-80a0-2963a1a54b2a\") " pod="openshift-must-gather-xk8qw/must-gather-7pfzw"
Jan 30 16:58:20 crc kubenswrapper[4740]: I0130 16:58:20.005247 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25drt\" (UniqueName: \"kubernetes.io/projected/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-kube-api-access-25drt\") pod \"must-gather-7pfzw\" (UID: \"b3b057b9-5afa-4aae-80a0-2963a1a54b2a\") " pod="openshift-must-gather-xk8qw/must-gather-7pfzw"
Jan 30 16:58:20 crc kubenswrapper[4740]: I0130 16:58:20.072871 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/must-gather-7pfzw"
Jan 30 16:58:20 crc kubenswrapper[4740]: I0130 16:58:20.760112 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xk8qw/must-gather-7pfzw"]
Jan 30 16:58:21 crc kubenswrapper[4740]: I0130 16:58:21.049159 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk8qw/must-gather-7pfzw" event={"ID":"b3b057b9-5afa-4aae-80a0-2963a1a54b2a","Type":"ContainerStarted","Data":"ce519a96d597a0587f75cea1d6144606ac1cfa1f32b65e5a107b8762156c2af7"}
Jan 30 16:58:30 crc kubenswrapper[4740]: I0130 16:58:30.335625 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 16:58:30 crc kubenswrapper[4740]: E0130 16:58:30.336442 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:58:31 crc kubenswrapper[4740]: I0130 16:58:31.162009 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk8qw/must-gather-7pfzw" event={"ID":"b3b057b9-5afa-4aae-80a0-2963a1a54b2a","Type":"ContainerStarted","Data":"9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70"}
Jan 30 16:58:32 crc kubenswrapper[4740]: I0130 16:58:32.176263 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk8qw/must-gather-7pfzw" event={"ID":"b3b057b9-5afa-4aae-80a0-2963a1a54b2a","Type":"ContainerStarted","Data":"57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0"}
Jan 30 16:58:32 crc kubenswrapper[4740]: I0130 16:58:32.227135 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xk8qw/must-gather-7pfzw" podStartSLOduration=3.414915035 podStartE2EDuration="13.22709605s" podCreationTimestamp="2026-01-30 16:58:19 +0000 UTC" firstStartedPulling="2026-01-30 16:58:20.764883051 +0000 UTC m=+3749.401945650" lastFinishedPulling="2026-01-30 16:58:30.577064066 +0000 UTC m=+3759.214126665" observedRunningTime="2026-01-30 16:58:32.215538731 +0000 UTC m=+3760.852601330" watchObservedRunningTime="2026-01-30 16:58:32.22709605 +0000 UTC m=+3760.864158649"
Jan 30 16:58:35 crc kubenswrapper[4740]: I0130 16:58:35.688990 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xk8qw/crc-debug-bgvfq"]
Jan 30 16:58:35 crc kubenswrapper[4740]: I0130 16:58:35.691146 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-bgvfq"
Jan 30 16:58:35 crc kubenswrapper[4740]: I0130 16:58:35.693376 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xk8qw"/"default-dockercfg-5pqx5"
Jan 30 16:58:35 crc kubenswrapper[4740]: I0130 16:58:35.789848 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgcp7\" (UniqueName: \"kubernetes.io/projected/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-kube-api-access-pgcp7\") pod \"crc-debug-bgvfq\" (UID: \"6e25f2c0-f05c-4c04-a07f-1d20acc6d311\") " pod="openshift-must-gather-xk8qw/crc-debug-bgvfq"
Jan 30 16:58:35 crc kubenswrapper[4740]: I0130 16:58:35.790242 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-host\") pod \"crc-debug-bgvfq\" (UID: \"6e25f2c0-f05c-4c04-a07f-1d20acc6d311\") " pod="openshift-must-gather-xk8qw/crc-debug-bgvfq"
Jan 30 16:58:35 crc kubenswrapper[4740]: I0130 16:58:35.892192 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgcp7\" (UniqueName: \"kubernetes.io/projected/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-kube-api-access-pgcp7\") pod \"crc-debug-bgvfq\" (UID: \"6e25f2c0-f05c-4c04-a07f-1d20acc6d311\") " pod="openshift-must-gather-xk8qw/crc-debug-bgvfq"
Jan 30 16:58:35 crc kubenswrapper[4740]: I0130 16:58:35.892788 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-host\") pod \"crc-debug-bgvfq\" (UID: \"6e25f2c0-f05c-4c04-a07f-1d20acc6d311\") " pod="openshift-must-gather-xk8qw/crc-debug-bgvfq"
Jan 30 16:58:35 crc kubenswrapper[4740]: I0130 16:58:35.892883 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-host\") pod \"crc-debug-bgvfq\" (UID: \"6e25f2c0-f05c-4c04-a07f-1d20acc6d311\") " pod="openshift-must-gather-xk8qw/crc-debug-bgvfq"
Jan 30 16:58:35 crc kubenswrapper[4740]: I0130 16:58:35.916814 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgcp7\" (UniqueName: \"kubernetes.io/projected/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-kube-api-access-pgcp7\") pod \"crc-debug-bgvfq\" (UID: \"6e25f2c0-f05c-4c04-a07f-1d20acc6d311\") " pod="openshift-must-gather-xk8qw/crc-debug-bgvfq"
Jan 30 16:58:36 crc kubenswrapper[4740]: I0130 16:58:36.015947 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-bgvfq"
Jan 30 16:58:36 crc kubenswrapper[4740]: W0130 16:58:36.056865 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e25f2c0_f05c_4c04_a07f_1d20acc6d311.slice/crio-2a9314b3df9840e3a16aaa0b566464bcde4d009d9ebc28d4687f9508edb05fbe WatchSource:0}: Error finding container 2a9314b3df9840e3a16aaa0b566464bcde4d009d9ebc28d4687f9508edb05fbe: Status 404 returned error can't find the container with id 2a9314b3df9840e3a16aaa0b566464bcde4d009d9ebc28d4687f9508edb05fbe
Jan 30 16:58:36 crc kubenswrapper[4740]: I0130 16:58:36.238623 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk8qw/crc-debug-bgvfq" event={"ID":"6e25f2c0-f05c-4c04-a07f-1d20acc6d311","Type":"ContainerStarted","Data":"2a9314b3df9840e3a16aaa0b566464bcde4d009d9ebc28d4687f9508edb05fbe"}
Jan 30 16:58:42 crc kubenswrapper[4740]: I0130 16:58:42.335557 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 16:58:42 crc kubenswrapper[4740]: E0130 16:58:42.336510 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:58:53 crc kubenswrapper[4740]: I0130 16:58:53.346027 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 16:58:53 crc kubenswrapper[4740]: E0130 16:58:53.346968 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:58:55 crc kubenswrapper[4740]: E0130 16:58:55.580118 4740 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296"
Jan 30 16:58:55 crc kubenswrapper[4740]: E0130 16:58:55.581100 4740 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgcp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-bgvfq_openshift-must-gather-xk8qw(6e25f2c0-f05c-4c04-a07f-1d20acc6d311): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 16:58:55 crc kubenswrapper[4740]: E0130 16:58:55.582407 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-xk8qw/crc-debug-bgvfq" podUID="6e25f2c0-f05c-4c04-a07f-1d20acc6d311"
Jan 30 16:58:56 crc kubenswrapper[4740]: E0130 16:58:56.575850 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-xk8qw/crc-debug-bgvfq" podUID="6e25f2c0-f05c-4c04-a07f-1d20acc6d311"
Jan 30 16:59:06 crc kubenswrapper[4740]: I0130 16:59:06.336448 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 16:59:06 crc kubenswrapper[4740]: E0130 16:59:06.339764 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:59:11 crc kubenswrapper[4740]: I0130 16:59:11.765966 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk8qw/crc-debug-bgvfq" event={"ID":"6e25f2c0-f05c-4c04-a07f-1d20acc6d311","Type":"ContainerStarted","Data":"d67579df597cb9c5ec50bdd5a3a77d6948c0db10bf182509eace9db925125ca3"}
Jan 30 16:59:11 crc kubenswrapper[4740]: I0130 16:59:11.806914 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xk8qw/crc-debug-bgvfq" podStartSLOduration=2.843764254 podStartE2EDuration="36.806873213s" podCreationTimestamp="2026-01-30 16:58:35 +0000 UTC" firstStartedPulling="2026-01-30 16:58:36.060622518 +0000 UTC m=+3764.697685117" lastFinishedPulling="2026-01-30 16:59:10.023731477 +0000 UTC m=+3798.660794076" observedRunningTime="2026-01-30 16:59:11.783322775 +0000 UTC m=+3800.420385374" watchObservedRunningTime="2026-01-30 16:59:11.806873213 +0000 UTC m=+3800.443935812"
Jan 30 16:59:21 crc kubenswrapper[4740]: I0130 16:59:21.336018 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 16:59:21 crc kubenswrapper[4740]: E0130 16:59:21.336754 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:59:33 crc kubenswrapper[4740]: I0130 16:59:33.346737 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 16:59:33 crc kubenswrapper[4740]: E0130 16:59:33.347589 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:59:46 crc kubenswrapper[4740]: I0130 16:59:46.336540 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 16:59:46 crc kubenswrapper[4740]: E0130 16:59:46.338083 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 16:59:59 crc kubenswrapper[4740]: I0130 16:59:59.335548 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 16:59:59 crc kubenswrapper[4740]: E0130 16:59:59.336760 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
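While the 5m back-off is in force, the sync loop keeps re-evaluating the pod and emits a fresh "RemoveContainer" / back-off pair roughly every 11-15 seconds, as the run above from 16:58:30 through 16:59:59 shows. A sketch that pulls those attempt times out of reflowed lines and prints the gaps; the regex anchors on the container ID quoted in this log:

import re
from datetime import datetime

# Matches the "RemoveContainer" attempts for the machine-config-daemon
# container seen above; edf79d1d... is the ID from this log.
ATTEMPT_RE = re.compile(r'^Jan 30 (\d{2}:\d{2}:\d{2}) .*"RemoveContainer" '
                        r'containerID="edf79d1d')

def retry_gaps(lines):
    """Seconds between successive RemoveContainer attempts."""
    stamps = [datetime.strptime(m.group(1), '%H:%M:%S')
              for line in lines if (m := ATTEMPT_RE.match(line))]
    return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]

# For the attempts above (16:58:30 ... 16:59:59) this yields roughly
# [12, 11, 13, 15, 12, 13, 13, 13] -- the sync retry cadence, not the 5m cap.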
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.196242 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"]
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.203860 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.207289 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.207860 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.229190 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"]
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.280708 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm4fg\" (UniqueName: \"kubernetes.io/projected/b653a385-09a4-49c3-b5be-a4649ff8fbdd-kube-api-access-zm4fg\") pod \"collect-profiles-29496540-7n7tc\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.280916 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b653a385-09a4-49c3-b5be-a4649ff8fbdd-secret-volume\") pod \"collect-profiles-29496540-7n7tc\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.280972 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b653a385-09a4-49c3-b5be-a4649ff8fbdd-config-volume\") pod \"collect-profiles-29496540-7n7tc\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.383226 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm4fg\" (UniqueName: \"kubernetes.io/projected/b653a385-09a4-49c3-b5be-a4649ff8fbdd-kube-api-access-zm4fg\") pod \"collect-profiles-29496540-7n7tc\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.383445 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b653a385-09a4-49c3-b5be-a4649ff8fbdd-secret-volume\") pod \"collect-profiles-29496540-7n7tc\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.383519 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b653a385-09a4-49c3-b5be-a4649ff8fbdd-config-volume\") pod \"collect-profiles-29496540-7n7tc\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.384824 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b653a385-09a4-49c3-b5be-a4649ff8fbdd-config-volume\") pod \"collect-profiles-29496540-7n7tc\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.390050 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b653a385-09a4-49c3-b5be-a4649ff8fbdd-secret-volume\") pod \"collect-profiles-29496540-7n7tc\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.404658 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm4fg\" (UniqueName: \"kubernetes.io/projected/b653a385-09a4-49c3-b5be-a4649ff8fbdd-kube-api-access-zm4fg\") pod \"collect-profiles-29496540-7n7tc\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:00 crc kubenswrapper[4740]: I0130 17:00:00.541910 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:01 crc kubenswrapper[4740]: I0130 17:00:01.133155 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"]
Jan 30 17:00:01 crc kubenswrapper[4740]: I0130 17:00:01.563706 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc" event={"ID":"b653a385-09a4-49c3-b5be-a4649ff8fbdd","Type":"ContainerStarted","Data":"843a24ae1027b70607e0a5862c186453ace9e6eab80cceccaf6e1ed8d362113a"}
Jan 30 17:00:01 crc kubenswrapper[4740]: I0130 17:00:01.564129 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc" event={"ID":"b653a385-09a4-49c3-b5be-a4649ff8fbdd","Type":"ContainerStarted","Data":"ac715eceafc77d342c3d38bffbe39cf926e71d22457fb44da2a1884f19f6b25e"}
Jan 30 17:00:02 crc kubenswrapper[4740]: I0130 17:00:02.580563 4740 generic.go:334] "Generic (PLEG): container finished" podID="b653a385-09a4-49c3-b5be-a4649ff8fbdd" containerID="843a24ae1027b70607e0a5862c186453ace9e6eab80cceccaf6e1ed8d362113a" exitCode=0
Jan 30 17:00:02 crc kubenswrapper[4740]: I0130 17:00:02.580630 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc" event={"ID":"b653a385-09a4-49c3-b5be-a4649ff8fbdd","Type":"ContainerDied","Data":"843a24ae1027b70607e0a5862c186453ace9e6eab80cceccaf6e1ed8d362113a"}
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.332547 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.416828 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b653a385-09a4-49c3-b5be-a4649ff8fbdd-secret-volume\") pod \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") "
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.416934 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b653a385-09a4-49c3-b5be-a4649ff8fbdd-config-volume\") pod \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") "
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.417142 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm4fg\" (UniqueName: \"kubernetes.io/projected/b653a385-09a4-49c3-b5be-a4649ff8fbdd-kube-api-access-zm4fg\") pod \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\" (UID: \"b653a385-09a4-49c3-b5be-a4649ff8fbdd\") "
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.419193 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b653a385-09a4-49c3-b5be-a4649ff8fbdd-config-volume" (OuterVolumeSpecName: "config-volume") pod "b653a385-09a4-49c3-b5be-a4649ff8fbdd" (UID: "b653a385-09a4-49c3-b5be-a4649ff8fbdd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.439723 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b653a385-09a4-49c3-b5be-a4649ff8fbdd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b653a385-09a4-49c3-b5be-a4649ff8fbdd" (UID: "b653a385-09a4-49c3-b5be-a4649ff8fbdd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.449637 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b653a385-09a4-49c3-b5be-a4649ff8fbdd-kube-api-access-zm4fg" (OuterVolumeSpecName: "kube-api-access-zm4fg") pod "b653a385-09a4-49c3-b5be-a4649ff8fbdd" (UID: "b653a385-09a4-49c3-b5be-a4649ff8fbdd"). InnerVolumeSpecName "kube-api-access-zm4fg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.520389 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b653a385-09a4-49c3-b5be-a4649ff8fbdd-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.520454 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b653a385-09a4-49c3-b5be-a4649ff8fbdd-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.520476 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm4fg\" (UniqueName: \"kubernetes.io/projected/b653a385-09a4-49c3-b5be-a4649ff8fbdd-kube-api-access-zm4fg\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.608248 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc" event={"ID":"b653a385-09a4-49c3-b5be-a4649ff8fbdd","Type":"ContainerDied","Data":"ac715eceafc77d342c3d38bffbe39cf926e71d22457fb44da2a1884f19f6b25e"}
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.608313 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac715eceafc77d342c3d38bffbe39cf926e71d22457fb44da2a1884f19f6b25e"
Jan 30 17:00:04 crc kubenswrapper[4740]: I0130 17:00:04.608428 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-7n7tc"
Jan 30 17:00:05 crc kubenswrapper[4740]: I0130 17:00:05.440415 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv"]
Jan 30 17:00:05 crc kubenswrapper[4740]: I0130 17:00:05.454663 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496495-f6bsv"]
Jan 30 17:00:07 crc kubenswrapper[4740]: I0130 17:00:07.360368 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d795672-b675-4220-a9a1-35a910a77f7b" path="/var/lib/kubelet/pods/6d795672-b675-4220-a9a1-35a910a77f7b/volumes"
Jan 30 17:00:12 crc kubenswrapper[4740]: I0130 17:00:12.336379 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 17:00:12 crc kubenswrapper[4740]: E0130 17:00:12.337435 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 17:00:25 crc kubenswrapper[4740]: I0130 17:00:25.335986 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a"
Jan 30 17:00:25 crc kubenswrapper[4740]: E0130 17:00:25.336861 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 17:00:27 crc kubenswrapper[4740]: I0130 17:00:27.887949 4740 generic.go:334] "Generic (PLEG): container finished" podID="6e25f2c0-f05c-4c04-a07f-1d20acc6d311" containerID="d67579df597cb9c5ec50bdd5a3a77d6948c0db10bf182509eace9db925125ca3" exitCode=0
Jan 30 17:00:27 crc kubenswrapper[4740]: I0130 17:00:27.888635 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk8qw/crc-debug-bgvfq" event={"ID":"6e25f2c0-f05c-4c04-a07f-1d20acc6d311","Type":"ContainerDied","Data":"d67579df597cb9c5ec50bdd5a3a77d6948c0db10bf182509eace9db925125ca3"}
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.046714 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-bgvfq"
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.091957 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xk8qw/crc-debug-bgvfq"]
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.104742 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xk8qw/crc-debug-bgvfq"]
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.159422 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgcp7\" (UniqueName: \"kubernetes.io/projected/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-kube-api-access-pgcp7\") pod \"6e25f2c0-f05c-4c04-a07f-1d20acc6d311\" (UID: \"6e25f2c0-f05c-4c04-a07f-1d20acc6d311\") "
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.159499 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-host\") pod \"6e25f2c0-f05c-4c04-a07f-1d20acc6d311\" (UID: \"6e25f2c0-f05c-4c04-a07f-1d20acc6d311\") "
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.159641 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-host" (OuterVolumeSpecName: "host") pod "6e25f2c0-f05c-4c04-a07f-1d20acc6d311" (UID: "6e25f2c0-f05c-4c04-a07f-1d20acc6d311"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.160549 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-host\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.168608 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-kube-api-access-pgcp7" (OuterVolumeSpecName: "kube-api-access-pgcp7") pod "6e25f2c0-f05c-4c04-a07f-1d20acc6d311" (UID: "6e25f2c0-f05c-4c04-a07f-1d20acc6d311"). InnerVolumeSpecName "kube-api-access-pgcp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.262737 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgcp7\" (UniqueName: \"kubernetes.io/projected/6e25f2c0-f05c-4c04-a07f-1d20acc6d311-kube-api-access-pgcp7\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.358735 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e25f2c0-f05c-4c04-a07f-1d20acc6d311" path="/var/lib/kubelet/pods/6e25f2c0-f05c-4c04-a07f-1d20acc6d311/volumes"
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.912890 4740 scope.go:117] "RemoveContainer" containerID="d67579df597cb9c5ec50bdd5a3a77d6948c0db10bf182509eace9db925125ca3"
Jan 30 17:00:29 crc kubenswrapper[4740]: I0130 17:00:29.912957 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-bgvfq"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.340438 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xk8qw/crc-debug-xd9wb"]
Jan 30 17:00:30 crc kubenswrapper[4740]: E0130 17:00:30.340889 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e25f2c0-f05c-4c04-a07f-1d20acc6d311" containerName="container-00"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.340903 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e25f2c0-f05c-4c04-a07f-1d20acc6d311" containerName="container-00"
Jan 30 17:00:30 crc kubenswrapper[4740]: E0130 17:00:30.340942 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b653a385-09a4-49c3-b5be-a4649ff8fbdd" containerName="collect-profiles"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.340950 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b653a385-09a4-49c3-b5be-a4649ff8fbdd" containerName="collect-profiles"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.341192 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e25f2c0-f05c-4c04-a07f-1d20acc6d311" containerName="container-00"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.341219 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b653a385-09a4-49c3-b5be-a4649ff8fbdd" containerName="collect-profiles"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.342059 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-xd9wb"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.345413 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xk8qw"/"default-dockercfg-5pqx5"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.492397 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-schhm\" (UniqueName: \"kubernetes.io/projected/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-kube-api-access-schhm\") pod \"crc-debug-xd9wb\" (UID: \"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15\") " pod="openshift-must-gather-xk8qw/crc-debug-xd9wb"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.492709 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-host\") pod \"crc-debug-xd9wb\" (UID: \"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15\") " pod="openshift-must-gather-xk8qw/crc-debug-xd9wb"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.594892 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-schhm\" (UniqueName: \"kubernetes.io/projected/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-kube-api-access-schhm\") pod \"crc-debug-xd9wb\" (UID: \"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15\") " pod="openshift-must-gather-xk8qw/crc-debug-xd9wb"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.594977 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-host\") pod \"crc-debug-xd9wb\" (UID: \"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15\") " pod="openshift-must-gather-xk8qw/crc-debug-xd9wb"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.595277 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-host\") pod \"crc-debug-xd9wb\" (UID: \"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15\") " pod="openshift-must-gather-xk8qw/crc-debug-xd9wb"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.615070 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-schhm\" (UniqueName: \"kubernetes.io/projected/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-kube-api-access-schhm\") pod \"crc-debug-xd9wb\" (UID: \"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15\") " pod="openshift-must-gather-xk8qw/crc-debug-xd9wb"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.660179 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-xd9wb"
Jan 30 17:00:30 crc kubenswrapper[4740]: I0130 17:00:30.926697 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk8qw/crc-debug-xd9wb" event={"ID":"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15","Type":"ContainerStarted","Data":"e523df004ccfa84f36ae459e3057bd3a837a79fd600ba5e35e204341d7ddae30"}
Jan 30 17:00:31 crc kubenswrapper[4740]: I0130 17:00:31.939445 4740 generic.go:334] "Generic (PLEG): container finished" podID="aba37a27-7b9c-4bc7-b28c-3ab9b794ca15" containerID="7b4768d37f9b98c3d20cad0fa19329f7a71fc421520771a427709e915e67316a" exitCode=0
Jan 30 17:00:31 crc kubenswrapper[4740]: I0130 17:00:31.939503 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk8qw/crc-debug-xd9wb" event={"ID":"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15","Type":"ContainerDied","Data":"7b4768d37f9b98c3d20cad0fa19329f7a71fc421520771a427709e915e67316a"}
Jan 30 17:00:33 crc kubenswrapper[4740]: I0130 17:00:33.106377 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-xd9wb"
Jan 30 17:00:33 crc kubenswrapper[4740]: I0130 17:00:33.158933 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-schhm\" (UniqueName: \"kubernetes.io/projected/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-kube-api-access-schhm\") pod \"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15\" (UID: \"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15\") "
Jan 30 17:00:33 crc kubenswrapper[4740]: I0130 17:00:33.158995 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-host\") pod \"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15\" (UID: \"aba37a27-7b9c-4bc7-b28c-3ab9b794ca15\") "
Jan 30 17:00:33 crc kubenswrapper[4740]: I0130 17:00:33.170219 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-host" (OuterVolumeSpecName: "host") pod "aba37a27-7b9c-4bc7-b28c-3ab9b794ca15" (UID: "aba37a27-7b9c-4bc7-b28c-3ab9b794ca15"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:00:33 crc kubenswrapper[4740]: I0130 17:00:33.174763 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-kube-api-access-schhm" (OuterVolumeSpecName: "kube-api-access-schhm") pod "aba37a27-7b9c-4bc7-b28c-3ab9b794ca15" (UID: "aba37a27-7b9c-4bc7-b28c-3ab9b794ca15"). InnerVolumeSpecName "kube-api-access-schhm".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:33 crc kubenswrapper[4740]: I0130 17:00:33.262120 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-schhm\" (UniqueName: \"kubernetes.io/projected/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-kube-api-access-schhm\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:33 crc kubenswrapper[4740]: I0130 17:00:33.262160 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15-host\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:33 crc kubenswrapper[4740]: I0130 17:00:33.877522 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xk8qw/crc-debug-xd9wb"] Jan 30 17:00:33 crc kubenswrapper[4740]: I0130 17:00:33.890289 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xk8qw/crc-debug-xd9wb"] Jan 30 17:00:33 crc kubenswrapper[4740]: I0130 17:00:33.962928 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e523df004ccfa84f36ae459e3057bd3a837a79fd600ba5e35e204341d7ddae30" Jan 30 17:00:33 crc kubenswrapper[4740]: I0130 17:00:33.962995 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-xd9wb" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.119290 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xk8qw/crc-debug-dzgxr"] Jan 30 17:00:35 crc kubenswrapper[4740]: E0130 17:00:35.120308 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba37a27-7b9c-4bc7-b28c-3ab9b794ca15" containerName="container-00" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.120328 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba37a27-7b9c-4bc7-b28c-3ab9b794ca15" containerName="container-00" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.120648 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="aba37a27-7b9c-4bc7-b28c-3ab9b794ca15" containerName="container-00" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.121661 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.123857 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xk8qw"/"default-dockercfg-5pqx5" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.209450 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-host\") pod \"crc-debug-dzgxr\" (UID: \"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c\") " pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.209614 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbmlt\" (UniqueName: \"kubernetes.io/projected/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-kube-api-access-mbmlt\") pod \"crc-debug-dzgxr\" (UID: \"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c\") " pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.311833 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-host\") pod \"crc-debug-dzgxr\" (UID: \"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c\") " pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.311978 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-host\") pod \"crc-debug-dzgxr\" (UID: \"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c\") " pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.312420 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbmlt\" (UniqueName: \"kubernetes.io/projected/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-kube-api-access-mbmlt\") pod \"crc-debug-dzgxr\" (UID: \"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c\") " pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.348448 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbmlt\" (UniqueName: \"kubernetes.io/projected/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-kube-api-access-mbmlt\") pod \"crc-debug-dzgxr\" (UID: \"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c\") " pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.355367 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aba37a27-7b9c-4bc7-b28c-3ab9b794ca15" path="/var/lib/kubelet/pods/aba37a27-7b9c-4bc7-b28c-3ab9b794ca15/volumes" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.444034 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.988975 4740 generic.go:334] "Generic (PLEG): container finished" podID="4333871a-c6f9-44b6-bd5d-a78ac7e26f7c" containerID="e74d07bb7a3d619f67b7185b41cd5767858e22f7394b37eead48abaac1837d11" exitCode=0 Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.989318 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" event={"ID":"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c","Type":"ContainerDied","Data":"e74d07bb7a3d619f67b7185b41cd5767858e22f7394b37eead48abaac1837d11"} Jan 30 17:00:35 crc kubenswrapper[4740]: I0130 17:00:35.989381 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" event={"ID":"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c","Type":"ContainerStarted","Data":"94e2f5449a0187d925fe5289b8e4d921c5fdf6f077fe3ba6873bc692e176e419"} Jan 30 17:00:36 crc kubenswrapper[4740]: I0130 17:00:36.062759 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xk8qw/crc-debug-dzgxr"] Jan 30 17:00:36 crc kubenswrapper[4740]: I0130 17:00:36.076191 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xk8qw/crc-debug-dzgxr"] Jan 30 17:00:37 crc kubenswrapper[4740]: I0130 17:00:37.160038 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" Jan 30 17:00:37 crc kubenswrapper[4740]: I0130 17:00:37.268899 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-host\") pod \"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c\" (UID: \"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c\") " Jan 30 17:00:37 crc kubenswrapper[4740]: I0130 17:00:37.269084 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-host" (OuterVolumeSpecName: "host") pod "4333871a-c6f9-44b6-bd5d-a78ac7e26f7c" (UID: "4333871a-c6f9-44b6-bd5d-a78ac7e26f7c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:00:37 crc kubenswrapper[4740]: I0130 17:00:37.269139 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbmlt\" (UniqueName: \"kubernetes.io/projected/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-kube-api-access-mbmlt\") pod \"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c\" (UID: \"4333871a-c6f9-44b6-bd5d-a78ac7e26f7c\") " Jan 30 17:00:37 crc kubenswrapper[4740]: I0130 17:00:37.269921 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-host\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:37 crc kubenswrapper[4740]: I0130 17:00:37.277572 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-kube-api-access-mbmlt" (OuterVolumeSpecName: "kube-api-access-mbmlt") pod "4333871a-c6f9-44b6-bd5d-a78ac7e26f7c" (UID: "4333871a-c6f9-44b6-bd5d-a78ac7e26f7c"). InnerVolumeSpecName "kube-api-access-mbmlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:37 crc kubenswrapper[4740]: I0130 17:00:37.335925 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:00:37 crc kubenswrapper[4740]: E0130 17:00:37.336281 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:00:37 crc kubenswrapper[4740]: I0130 17:00:37.351269 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4333871a-c6f9-44b6-bd5d-a78ac7e26f7c" path="/var/lib/kubelet/pods/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c/volumes" Jan 30 17:00:37 crc kubenswrapper[4740]: I0130 17:00:37.375088 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbmlt\" (UniqueName: \"kubernetes.io/projected/4333871a-c6f9-44b6-bd5d-a78ac7e26f7c-kube-api-access-mbmlt\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4740]: I0130 17:00:38.011445 4740 scope.go:117] "RemoveContainer" containerID="e74d07bb7a3d619f67b7185b41cd5767858e22f7394b37eead48abaac1837d11" Jan 30 17:00:38 crc kubenswrapper[4740]: I0130 17:00:38.011482 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/crc-debug-dzgxr" Jan 30 17:00:38 crc kubenswrapper[4740]: I0130 17:00:38.266440 4740 scope.go:117] "RemoveContainer" containerID="9d588e41b1d7245c4f9367491522cd3e5663b3b60d0b11f5f2a9a1429684ff27" Jan 30 17:00:48 crc kubenswrapper[4740]: I0130 17:00:48.336454 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:00:48 crc kubenswrapper[4740]: E0130 17:00:48.337257 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.158011 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496541-jkgp5"] Jan 30 17:01:00 crc kubenswrapper[4740]: E0130 17:01:00.159313 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4333871a-c6f9-44b6-bd5d-a78ac7e26f7c" containerName="container-00" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.159332 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="4333871a-c6f9-44b6-bd5d-a78ac7e26f7c" containerName="container-00" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.159652 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="4333871a-c6f9-44b6-bd5d-a78ac7e26f7c" containerName="container-00" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.160753 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.172735 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496541-jkgp5"] Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.249045 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-fernet-keys\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.249133 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-config-data\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.249238 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dsg4\" (UniqueName: \"kubernetes.io/projected/c0afeb76-6c8f-47c6-81b9-ec569da67517-kube-api-access-4dsg4\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.249299 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-combined-ca-bundle\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.353066 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-fernet-keys\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.354274 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-config-data\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.358442 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dsg4\" (UniqueName: \"kubernetes.io/projected/c0afeb76-6c8f-47c6-81b9-ec569da67517-kube-api-access-4dsg4\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.358674 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-combined-ca-bundle\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.359930 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-fernet-keys\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.381472 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-config-data\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.382258 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-combined-ca-bundle\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.404181 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dsg4\" (UniqueName: \"kubernetes.io/projected/c0afeb76-6c8f-47c6-81b9-ec569da67517-kube-api-access-4dsg4\") pod \"keystone-cron-29496541-jkgp5\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:00 crc kubenswrapper[4740]: I0130 17:01:00.483012 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:01 crc kubenswrapper[4740]: I0130 17:01:01.127295 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496541-jkgp5"] Jan 30 17:01:01 crc kubenswrapper[4740]: I0130 17:01:01.332080 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496541-jkgp5" event={"ID":"c0afeb76-6c8f-47c6-81b9-ec569da67517","Type":"ContainerStarted","Data":"89b90442769bd5a04dd7712fe3878637f8be20805970eed561c3de22a8ea87d1"} Jan 30 17:01:02 crc kubenswrapper[4740]: I0130 17:01:02.346966 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496541-jkgp5" event={"ID":"c0afeb76-6c8f-47c6-81b9-ec569da67517","Type":"ContainerStarted","Data":"392a5579faf22f5e47032de67622fd3df762a162c6ad84d373757a4b39117147"} Jan 30 17:01:02 crc kubenswrapper[4740]: I0130 17:01:02.400083 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496541-jkgp5" podStartSLOduration=2.400050605 podStartE2EDuration="2.400050605s" podCreationTimestamp="2026-01-30 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:01:02.393974483 +0000 UTC m=+3911.031037082" watchObservedRunningTime="2026-01-30 17:01:02.400050605 +0000 UTC m=+3911.037113204" Jan 30 17:01:03 crc kubenswrapper[4740]: I0130 17:01:03.352256 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:01:03 crc kubenswrapper[4740]: E0130 17:01:03.352596 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:01:08 crc kubenswrapper[4740]: I0130 17:01:08.424280 4740 generic.go:334] "Generic (PLEG): container finished" podID="c0afeb76-6c8f-47c6-81b9-ec569da67517" containerID="392a5579faf22f5e47032de67622fd3df762a162c6ad84d373757a4b39117147" exitCode=0 Jan 30 17:01:08 crc kubenswrapper[4740]: I0130 17:01:08.424362 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496541-jkgp5" event={"ID":"c0afeb76-6c8f-47c6-81b9-ec569da67517","Type":"ContainerDied","Data":"392a5579faf22f5e47032de67622fd3df762a162c6ad84d373757a4b39117147"} Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.136425 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.227251 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-fernet-keys\") pod \"c0afeb76-6c8f-47c6-81b9-ec569da67517\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.227324 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dsg4\" (UniqueName: \"kubernetes.io/projected/c0afeb76-6c8f-47c6-81b9-ec569da67517-kube-api-access-4dsg4\") pod \"c0afeb76-6c8f-47c6-81b9-ec569da67517\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.227628 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-combined-ca-bundle\") pod \"c0afeb76-6c8f-47c6-81b9-ec569da67517\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.227844 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-config-data\") pod \"c0afeb76-6c8f-47c6-81b9-ec569da67517\" (UID: \"c0afeb76-6c8f-47c6-81b9-ec569da67517\") " Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.239571 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c0afeb76-6c8f-47c6-81b9-ec569da67517" (UID: "c0afeb76-6c8f-47c6-81b9-ec569da67517"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.268980 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0afeb76-6c8f-47c6-81b9-ec569da67517-kube-api-access-4dsg4" (OuterVolumeSpecName: "kube-api-access-4dsg4") pod "c0afeb76-6c8f-47c6-81b9-ec569da67517" (UID: "c0afeb76-6c8f-47c6-81b9-ec569da67517"). InnerVolumeSpecName "kube-api-access-4dsg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.290676 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0afeb76-6c8f-47c6-81b9-ec569da67517" (UID: "c0afeb76-6c8f-47c6-81b9-ec569da67517"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.326657 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-config-data" (OuterVolumeSpecName: "config-data") pod "c0afeb76-6c8f-47c6-81b9-ec569da67517" (UID: "c0afeb76-6c8f-47c6-81b9-ec569da67517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.332612 4740 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.332675 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dsg4\" (UniqueName: \"kubernetes.io/projected/c0afeb76-6c8f-47c6-81b9-ec569da67517-kube-api-access-4dsg4\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.332689 4740 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.332702 4740 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0afeb76-6c8f-47c6-81b9-ec569da67517-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.448964 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496541-jkgp5" event={"ID":"c0afeb76-6c8f-47c6-81b9-ec569da67517","Type":"ContainerDied","Data":"89b90442769bd5a04dd7712fe3878637f8be20805970eed561c3de22a8ea87d1"} Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.449369 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b90442769bd5a04dd7712fe3878637f8be20805970eed561c3de22a8ea87d1" Jan 30 17:01:10 crc kubenswrapper[4740]: I0130 17:01:10.449666 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496541-jkgp5" Jan 30 17:01:13 crc kubenswrapper[4740]: I0130 17:01:13.189522 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_47e7ebfc-24f9-4946-aace-c402546d5a60/init-config-reloader/0.log" Jan 30 17:01:13 crc kubenswrapper[4740]: I0130 17:01:13.618568 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_47e7ebfc-24f9-4946-aace-c402546d5a60/config-reloader/0.log" Jan 30 17:01:13 crc kubenswrapper[4740]: I0130 17:01:13.674258 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_47e7ebfc-24f9-4946-aace-c402546d5a60/alertmanager/0.log" Jan 30 17:01:13 crc kubenswrapper[4740]: I0130 17:01:13.793277 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_47e7ebfc-24f9-4946-aace-c402546d5a60/init-config-reloader/0.log" Jan 30 17:01:14 crc kubenswrapper[4740]: I0130 17:01:14.047713 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c9f844546-g6v8p_6c8ace4b-028d-45a5-af9d-360781681219/barbican-api/0.log" Jan 30 17:01:14 crc kubenswrapper[4740]: I0130 17:01:14.115370 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c9f844546-g6v8p_6c8ace4b-028d-45a5-af9d-360781681219/barbican-api-log/0.log" Jan 30 17:01:14 crc kubenswrapper[4740]: I0130 17:01:14.224316 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-649cd9f6b8-lgj8x_92b93f04-34e0-47a3-af34-cd7e7717c444/barbican-keystone-listener/0.log" Jan 30 17:01:14 crc kubenswrapper[4740]: I0130 17:01:14.507249 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7cc6d874d7-q46r7_06bc0d0f-04a5-4703-97a4-6d44ccc42006/barbican-worker/0.log" Jan 30 17:01:14 crc kubenswrapper[4740]: I0130 17:01:14.551477 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-649cd9f6b8-lgj8x_92b93f04-34e0-47a3-af34-cd7e7717c444/barbican-keystone-listener-log/0.log" Jan 30 17:01:14 crc kubenswrapper[4740]: I0130 17:01:14.675847 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7cc6d874d7-q46r7_06bc0d0f-04a5-4703-97a4-6d44ccc42006/barbican-worker-log/0.log" Jan 30 17:01:14 crc kubenswrapper[4740]: I0130 17:01:14.877911 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv_1d25020c-4758-47af-a6c4-5c6cd3c1b74b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:15 crc kubenswrapper[4740]: I0130 17:01:15.417421 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ca7e8237-6940-4092-8df0-97fa0865cc46/ceilometer-central-agent/0.log" Jan 30 17:01:15 crc kubenswrapper[4740]: I0130 17:01:15.656131 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ca7e8237-6940-4092-8df0-97fa0865cc46/proxy-httpd/0.log" Jan 30 17:01:15 crc kubenswrapper[4740]: I0130 17:01:15.681738 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ca7e8237-6940-4092-8df0-97fa0865cc46/ceilometer-notification-agent/0.log" Jan 30 17:01:15 crc kubenswrapper[4740]: I0130 17:01:15.746294 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ca7e8237-6940-4092-8df0-97fa0865cc46/sg-core/0.log" Jan 30 17:01:16 
crc kubenswrapper[4740]: I0130 17:01:16.018612 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_22570e91-9697-47f0-81d5-c38551f883b2/cinder-api/0.log" Jan 30 17:01:16 crc kubenswrapper[4740]: I0130 17:01:16.042969 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_22570e91-9697-47f0-81d5-c38551f883b2/cinder-api-log/0.log" Jan 30 17:01:16 crc kubenswrapper[4740]: I0130 17:01:16.249372 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_97251097-8f48-4938-ba55-ca2ad0e01a6f/cinder-scheduler/0.log" Jan 30 17:01:16 crc kubenswrapper[4740]: I0130 17:01:16.297967 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_97251097-8f48-4938-ba55-ca2ad0e01a6f/probe/0.log" Jan 30 17:01:16 crc kubenswrapper[4740]: I0130 17:01:16.654272 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b1ae2907-297d-49dc-99ed-eda202004650/cloudkitty-api/0.log" Jan 30 17:01:16 crc kubenswrapper[4740]: I0130 17:01:16.657367 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b1ae2907-297d-49dc-99ed-eda202004650/cloudkitty-api-log/0.log" Jan 30 17:01:16 crc kubenswrapper[4740]: I0130 17:01:16.986610 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_770634d4-2799-4d23-b96d-9f7fa5286e72/loki-compactor/0.log" Jan 30 17:01:17 crc kubenswrapper[4740]: I0130 17:01:17.115607 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-66dfd9bb-ln5c7_2614d072-47f4-4ed5-bfca-df4e1c46c665/loki-distributor/0.log" Jan 30 17:01:17 crc kubenswrapper[4740]: I0130 17:01:17.335604 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:01:17 crc kubenswrapper[4740]: E0130 17:01:17.336405 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:01:17 crc kubenswrapper[4740]: I0130 17:01:17.461591 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2_e2829a20-2177-481a-9a86-73f8bb323661/gateway/0.log" Jan 30 17:01:17 crc kubenswrapper[4740]: I0130 17:01:17.631837 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt_d46b15b9-9ad3-4699-9358-44d48e09f824/gateway/0.log" Jan 30 17:01:17 crc kubenswrapper[4740]: I0130 17:01:17.922017 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_96208f50-7c8d-49c1-b235-def86e2ea52d/loki-index-gateway/0.log" Jan 30 17:01:18 crc kubenswrapper[4740]: I0130 17:01:18.172518 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1/loki-ingester/0.log" Jan 30 17:01:18 crc kubenswrapper[4740]: I0130 17:01:18.509167 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cloudkitty-lokistack-querier-795fd8f8cc-z6wx2_471174e9-72cd-40a9-8502-103a233c0dbe/loki-querier/0.log" Jan 30 17:01:18 crc kubenswrapper[4740]: I0130 17:01:18.611472 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4_ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2/loki-query-frontend/0.log" Jan 30 17:01:19 crc kubenswrapper[4740]: I0130 17:01:19.167531 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r_2cf84dba-a4e6-413f-a6d5-81779c179d30/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:19 crc kubenswrapper[4740]: I0130 17:01:19.608164 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw_b913a0e7-afaa-4afb-9520-7930587f3b2f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:19 crc kubenswrapper[4740]: I0130 17:01:19.644565 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-2bdvp_03fa751d-d601-4f94-8cd6-3607c005211c/init/0.log" Jan 30 17:01:19 crc kubenswrapper[4740]: I0130 17:01:19.965084 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-2bdvp_03fa751d-d601-4f94-8cd6-3607c005211c/init/0.log" Jan 30 17:01:19 crc kubenswrapper[4740]: I0130 17:01:19.998298 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-2bdvp_03fa751d-d601-4f94-8cd6-3607c005211c/dnsmasq-dns/0.log" Jan 30 17:01:20 crc kubenswrapper[4740]: I0130 17:01:20.086683 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d_c63be956-8703-45e6-8b81-1867d602a2d8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:20 crc kubenswrapper[4740]: I0130 17:01:20.311495 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0e3b49e9-60b0-4090-a703-acbc21b9b6b0/glance-httpd/0.log" Jan 30 17:01:20 crc kubenswrapper[4740]: I0130 17:01:20.370410 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0e3b49e9-60b0-4090-a703-acbc21b9b6b0/glance-log/0.log" Jan 30 17:01:20 crc kubenswrapper[4740]: I0130 17:01:20.719898 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3d01b7b2-9f95-43e7-abae-1b1acb9c817b/glance-log/0.log" Jan 30 17:01:20 crc kubenswrapper[4740]: I0130 17:01:20.749200 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3d01b7b2-9f95-43e7-abae-1b1acb9c817b/glance-httpd/0.log" Jan 30 17:01:20 crc kubenswrapper[4740]: I0130 17:01:20.854420 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f_dafe432a-92c3-4e2a-8e5b-6f4579049269/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:21 crc kubenswrapper[4740]: I0130 17:01:21.876618 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4bsfm_98c07536-da6e-495d-8148-949896f2b4e3/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:22 crc kubenswrapper[4740]: I0130 17:01:22.243110 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29496541-jkgp5_c0afeb76-6c8f-47c6-81b9-ec569da67517/keystone-cron/0.log" Jan 30 17:01:22 crc kubenswrapper[4740]: I0130 17:01:22.369810 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-59f5786cfd-w4tqb_7f54d2dc-eb88-4049-8f40-4605058f7feb/keystone-api/0.log" Jan 30 17:01:22 crc kubenswrapper[4740]: I0130 17:01:22.477533 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_1c39bfe6-b89f-4699-95ff-e79c94b13740/kube-state-metrics/0.log" Jan 30 17:01:22 crc kubenswrapper[4740]: I0130 17:01:22.645967 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp_198ac256-3459-4e44-9c68-9efd25cf1ec5/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:23 crc kubenswrapper[4740]: I0130 17:01:23.545684 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-586b4b4677-4tdp8_4876d8e9-6662-4958-bb1a-091307ccfd02/neutron-httpd/0.log" Jan 30 17:01:23 crc kubenswrapper[4740]: I0130 17:01:23.670634 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-586b4b4677-4tdp8_4876d8e9-6662-4958-bb1a-091307ccfd02/neutron-api/0.log" Jan 30 17:01:23 crc kubenswrapper[4740]: I0130 17:01:23.842970 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw_c32077e1-24f2-46ea-868d-914b78472dfe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:24 crc kubenswrapper[4740]: I0130 17:01:24.530136 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2be29a5b-4407-4ef0-bf73-538f62c7ae2e/nova-api-log/0.log" Jan 30 17:01:24 crc kubenswrapper[4740]: I0130 17:01:24.815520 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_dfe42dad-1fc3-4802-8d95-2e764a6c2750/nova-cell0-conductor-conductor/0.log" Jan 30 17:01:24 crc kubenswrapper[4740]: I0130 17:01:24.856741 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2be29a5b-4407-4ef0-bf73-538f62c7ae2e/nova-api-api/0.log" Jan 30 17:01:25 crc kubenswrapper[4740]: I0130 17:01:25.236467 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8372d763-1fed-4ff1-a573-ae34f6758115/nova-cell1-conductor-conductor/0.log" Jan 30 17:01:25 crc kubenswrapper[4740]: I0130 17:01:25.435566 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ae10fd41-d2ed-4133-a41f-ecab597498fa/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 17:01:25 crc kubenswrapper[4740]: I0130 17:01:25.638685 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-58bwn_ffb086ab-4d15-4da9-babd-b3f544f4a26b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:26 crc kubenswrapper[4740]: I0130 17:01:26.234889 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e1f00ea-3a1e-4684-ad0f-26180738550d/nova-metadata-log/0.log" Jan 30 17:01:27 crc kubenswrapper[4740]: I0130 17:01:27.073794 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_98ae8da8-b7e7-40ca-8116-a91dc003a22c/nova-scheduler-scheduler/0.log" Jan 30 17:01:27 crc kubenswrapper[4740]: I0130 17:01:27.244659 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_483203e9-89d7-4b67-b0b9-d0bda08469da/mysql-bootstrap/0.log" Jan 30 17:01:27 crc kubenswrapper[4740]: I0130 17:01:27.469362 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_483203e9-89d7-4b67-b0b9-d0bda08469da/mysql-bootstrap/0.log" Jan 30 17:01:27 crc kubenswrapper[4740]: I0130 17:01:27.530532 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_483203e9-89d7-4b67-b0b9-d0bda08469da/galera/0.log" Jan 30 17:01:27 crc kubenswrapper[4740]: I0130 17:01:27.925446 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_09f1ea51-a4df-41eb-a996-f19303114474/mysql-bootstrap/0.log" Jan 30 17:01:28 crc kubenswrapper[4740]: I0130 17:01:28.148580 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_09f1ea51-a4df-41eb-a996-f19303114474/mysql-bootstrap/0.log" Jan 30 17:01:28 crc kubenswrapper[4740]: I0130 17:01:28.345258 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e1f00ea-3a1e-4684-ad0f-26180738550d/nova-metadata-metadata/0.log" Jan 30 17:01:28 crc kubenswrapper[4740]: I0130 17:01:28.377437 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_09f1ea51-a4df-41eb-a996-f19303114474/galera/0.log" Jan 30 17:01:28 crc kubenswrapper[4740]: I0130 17:01:28.578501 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0/openstackclient/0.log" Jan 30 17:01:28 crc kubenswrapper[4740]: I0130 17:01:28.828164 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8vhhm_25c16e6c-3931-4064-bf64-baf0759712a5/ovn-controller/0.log" Jan 30 17:01:29 crc kubenswrapper[4740]: I0130 17:01:29.021118 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fpfkt_12656704-b213-40b2-9520-58db055e7380/openstack-network-exporter/0.log" Jan 30 17:01:29 crc kubenswrapper[4740]: I0130 17:01:29.309257 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7wnqc_81f43ac7-ed84-4eff-af70-47991eaab066/ovsdb-server-init/0.log" Jan 30 17:01:29 crc kubenswrapper[4740]: I0130 17:01:29.336461 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:01:29 crc kubenswrapper[4740]: E0130 17:01:29.336929 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:01:30 crc kubenswrapper[4740]: I0130 17:01:30.005458 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7wnqc_81f43ac7-ed84-4eff-af70-47991eaab066/ovs-vswitchd/0.log" Jan 30 17:01:30 crc kubenswrapper[4740]: I0130 17:01:30.090320 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7wnqc_81f43ac7-ed84-4eff-af70-47991eaab066/ovsdb-server/0.log" Jan 30 17:01:30 crc kubenswrapper[4740]: I0130 17:01:30.152799 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-7wnqc_81f43ac7-ed84-4eff-af70-47991eaab066/ovsdb-server-init/0.log" Jan 30 17:01:30 crc kubenswrapper[4740]: I0130 17:01:30.416860 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-7mgcd_bdddad8e-9863-4a79-9883-cd130b7fe9f2/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:30 crc kubenswrapper[4740]: I0130 17:01:30.654214 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5972ae70-676c-4eca-a931-92f76fe6efe5/openstack-network-exporter/0.log" Jan 30 17:01:30 crc kubenswrapper[4740]: I0130 17:01:30.680837 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5972ae70-676c-4eca-a931-92f76fe6efe5/ovn-northd/0.log" Jan 30 17:01:31 crc kubenswrapper[4740]: I0130 17:01:31.019765 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c2182168-2683-42dd-abfc-1d19d9079ca6/ovsdbserver-nb/0.log" Jan 30 17:01:31 crc kubenswrapper[4740]: I0130 17:01:31.020216 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c2182168-2683-42dd-abfc-1d19d9079ca6/openstack-network-exporter/0.log" Jan 30 17:01:31 crc kubenswrapper[4740]: I0130 17:01:31.405609 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_36bbce3a-c121-4811-9a61-ab05b62dce0b/ovsdbserver-sb/0.log" Jan 30 17:01:31 crc kubenswrapper[4740]: I0130 17:01:31.441938 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_36bbce3a-c121-4811-9a61-ab05b62dce0b/openstack-network-exporter/0.log" Jan 30 17:01:31 crc kubenswrapper[4740]: I0130 17:01:31.893709 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7745b764-mmpkw_f4b64e71-6b99-4f78-9636-4996a1e4ecee/placement-api/0.log" Jan 30 17:01:31 crc kubenswrapper[4740]: I0130 17:01:31.924204 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7745b764-mmpkw_f4b64e71-6b99-4f78-9636-4996a1e4ecee/placement-log/0.log" Jan 30 17:01:32 crc kubenswrapper[4740]: I0130 17:01:32.207686 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9b7e2c82-6c33-432f-b94e-ea939065b33c/init-config-reloader/0.log" Jan 30 17:01:32 crc kubenswrapper[4740]: I0130 17:01:32.476484 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9b7e2c82-6c33-432f-b94e-ea939065b33c/init-config-reloader/0.log" Jan 30 17:01:32 crc kubenswrapper[4740]: I0130 17:01:32.940078 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9b7e2c82-6c33-432f-b94e-ea939065b33c/config-reloader/0.log" Jan 30 17:01:32 crc kubenswrapper[4740]: I0130 17:01:32.961945 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9b7e2c82-6c33-432f-b94e-ea939065b33c/prometheus/0.log" Jan 30 17:01:33 crc kubenswrapper[4740]: I0130 17:01:33.194928 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9b7e2c82-6c33-432f-b94e-ea939065b33c/thanos-sidecar/0.log" Jan 30 17:01:33 crc kubenswrapper[4740]: I0130 17:01:33.449188 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1e1f0777-9068-4928-a4e8-971dfcbf905c/setup-container/0.log" Jan 30 17:01:33 crc kubenswrapper[4740]: I0130 17:01:33.711923 4740 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1e1f0777-9068-4928-a4e8-971dfcbf905c/setup-container/0.log" Jan 30 17:01:33 crc kubenswrapper[4740]: I0130 17:01:33.775457 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_1f12295c-9646-4ff9-854d-542e75e78e5a/cloudkitty-proc/0.log" Jan 30 17:01:33 crc kubenswrapper[4740]: I0130 17:01:33.799848 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1e1f0777-9068-4928-a4e8-971dfcbf905c/rabbitmq/0.log" Jan 30 17:01:34 crc kubenswrapper[4740]: I0130 17:01:34.012403 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_83485d04-0a7f-45d0-9a43-66412e5e577e/setup-container/0.log" Jan 30 17:01:34 crc kubenswrapper[4740]: I0130 17:01:34.211450 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_83485d04-0a7f-45d0-9a43-66412e5e577e/setup-container/0.log" Jan 30 17:01:34 crc kubenswrapper[4740]: I0130 17:01:34.382371 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6_9f74f942-192f-46c2-b1fd-df038a2fd9e7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:34 crc kubenswrapper[4740]: I0130 17:01:34.382854 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_83485d04-0a7f-45d0-9a43-66412e5e577e/rabbitmq/0.log" Jan 30 17:01:34 crc kubenswrapper[4740]: I0130 17:01:34.820325 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-qwsf4_92f231c6-6140-49b3-89ba-65cf9472a1dd/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:34 crc kubenswrapper[4740]: I0130 17:01:34.955722 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6_e8918d38-5722-4b5b-9b52-5a18971aa5f1/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:35 crc kubenswrapper[4740]: I0130 17:01:35.122287 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xqw8f_4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:35 crc kubenswrapper[4740]: I0130 17:01:35.235616 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-sklln_f616041d-231d-409f-b1eb-bb0939ada6d6/ssh-known-hosts-edpm-deployment/0.log" Jan 30 17:01:35 crc kubenswrapper[4740]: I0130 17:01:35.486408 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77b9c5655-hbm7j_476176f1-b9ac-4d2d-90ea-7abfcea252c4/proxy-server/0.log" Jan 30 17:01:35 crc kubenswrapper[4740]: I0130 17:01:35.679118 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77b9c5655-hbm7j_476176f1-b9ac-4d2d-90ea-7abfcea252c4/proxy-httpd/0.log" Jan 30 17:01:36 crc kubenswrapper[4740]: I0130 17:01:36.354309 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9q76k_445dee53-61e3-43c6-b8a9-278954f963a2/swift-ring-rebalance/0.log" Jan 30 17:01:36 crc kubenswrapper[4740]: I0130 17:01:36.386612 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/account-reaper/0.log" Jan 30 17:01:36 crc kubenswrapper[4740]: I0130 17:01:36.398397 4740 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/account-auditor/0.log" Jan 30 17:01:36 crc kubenswrapper[4740]: I0130 17:01:36.562966 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/account-replicator/0.log" Jan 30 17:01:36 crc kubenswrapper[4740]: I0130 17:01:36.657577 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/container-auditor/0.log" Jan 30 17:01:36 crc kubenswrapper[4740]: I0130 17:01:36.704530 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/container-replicator/0.log" Jan 30 17:01:36 crc kubenswrapper[4740]: I0130 17:01:36.710451 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/account-server/0.log" Jan 30 17:01:36 crc kubenswrapper[4740]: I0130 17:01:36.846396 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/container-server/0.log" Jan 30 17:01:36 crc kubenswrapper[4740]: I0130 17:01:36.928917 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/container-updater/0.log" Jan 30 17:01:37 crc kubenswrapper[4740]: I0130 17:01:37.000711 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/object-auditor/0.log" Jan 30 17:01:37 crc kubenswrapper[4740]: I0130 17:01:37.043636 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/object-expirer/0.log" Jan 30 17:01:37 crc kubenswrapper[4740]: I0130 17:01:37.129310 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/object-replicator/0.log" Jan 30 17:01:37 crc kubenswrapper[4740]: I0130 17:01:37.261427 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/object-server/0.log" Jan 30 17:01:37 crc kubenswrapper[4740]: I0130 17:01:37.333832 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/rsync/0.log" Jan 30 17:01:37 crc kubenswrapper[4740]: I0130 17:01:37.375110 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/object-updater/0.log" Jan 30 17:01:37 crc kubenswrapper[4740]: I0130 17:01:37.693993 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/swift-recon-cron/0.log" Jan 30 17:01:37 crc kubenswrapper[4740]: I0130 17:01:37.946488 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-b74gj_26ccd837-ffdb-4155-b2ad-032ef3dfa49e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:38 crc kubenswrapper[4740]: I0130 17:01:38.106542 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_bbb613b4-f2f3-4388-ae48-986e0281000f/tempest-tests-tempest-tests-runner/0.log" Jan 30 17:01:38 crc kubenswrapper[4740]: I0130 17:01:38.185651 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7681e657-e354-4e35-8cd2-351cc51fdb4a/test-operator-logs-container/0.log" Jan 30 17:01:38 crc kubenswrapper[4740]: I0130 17:01:38.390672 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b_a5fa2ffd-a5ba-47a0-a095-bd8219667aa3/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 17:01:40 crc kubenswrapper[4740]: I0130 17:01:40.902476 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_20cc1f1a-e021-42dd-b435-64eaf9cfa1d7/memcached/0.log" Jan 30 17:01:44 crc kubenswrapper[4740]: I0130 17:01:44.335870 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:01:44 crc kubenswrapper[4740]: E0130 17:01:44.336570 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:01:58 crc kubenswrapper[4740]: I0130 17:01:58.335660 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:01:58 crc kubenswrapper[4740]: E0130 17:01:58.336639 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:02:11 crc kubenswrapper[4740]: I0130 17:02:11.336759 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:02:11 crc kubenswrapper[4740]: E0130 17:02:11.337644 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:02:12 crc kubenswrapper[4740]: I0130 17:02:12.554837 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/util/0.log" Jan 30 17:02:12 crc kubenswrapper[4740]: I0130 17:02:12.724325 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/util/0.log" Jan 30 17:02:12 crc kubenswrapper[4740]: I0130 17:02:12.796020 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/pull/0.log" Jan 30 17:02:12 crc kubenswrapper[4740]: I0130 17:02:12.797753 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/pull/0.log" Jan 30 17:02:12 crc kubenswrapper[4740]: I0130 17:02:12.974559 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/pull/0.log" Jan 30 17:02:13 crc kubenswrapper[4740]: I0130 17:02:13.029034 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/extract/0.log" Jan 30 17:02:13 crc kubenswrapper[4740]: I0130 17:02:13.035258 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/util/0.log" Jan 30 17:02:13 crc kubenswrapper[4740]: I0130 17:02:13.300228 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-tzdc2_b3f3f690-263c-406b-9651-b1d548a73010/manager/0.log" Jan 30 17:02:13 crc kubenswrapper[4740]: I0130 17:02:13.363164 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-xsqtm_4ffa4d95-fc8d-4352-9bb3-b74038d53453/manager/0.log" Jan 30 17:02:13 crc kubenswrapper[4740]: I0130 17:02:13.759994 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-q652d_7a1d5aff-da4c-4c0e-9616-44da3511eef2/manager/0.log" Jan 30 17:02:13 crc kubenswrapper[4740]: I0130 17:02:13.877162 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-2cj65_de27448d-0b23-4bbb-81b2-7818361e53bf/manager/0.log" Jan 30 17:02:14 crc kubenswrapper[4740]: I0130 17:02:14.077806 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-jjtfm_9fa5493f-2e76-4fda-9a43-4d8e7828f2a7/manager/0.log" Jan 30 17:02:14 crc kubenswrapper[4740]: I0130 17:02:14.227828 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-g8sm9_97e430a6-ad51-4e80-999e-75e568b1d6b6/manager/0.log" Jan 30 17:02:14 crc kubenswrapper[4740]: I0130 17:02:14.610600 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-nwnsv_ac86533b-0c5a-4704-b497-6e7e1114d938/manager/0.log" Jan 30 17:02:14 crc kubenswrapper[4740]: I0130 17:02:14.674227 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-6wz9h_736c30f6-a1e4-47aa-a6d0-713baf99ad69/manager/0.log" Jan 30 17:02:14 crc kubenswrapper[4740]: I0130 17:02:14.856221 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-d4hf5_88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c/manager/0.log" Jan 30 17:02:15 crc kubenswrapper[4740]: I0130 17:02:15.081500 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-w7jt2_b9648635-827e-4a21-8890-ba8b1772d7c4/manager/0.log" Jan 30 17:02:15 crc kubenswrapper[4740]: I0130 17:02:15.291951 4740 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-lqf5n_c35e116f-97e5-47ec-aa40-955321cb09d5/manager/0.log" Jan 30 17:02:15 crc kubenswrapper[4740]: I0130 17:02:15.357832 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-v8885_b82bfd4e-e72e-4941-b8aa-1baae2433217/manager/0.log" Jan 30 17:02:15 crc kubenswrapper[4740]: I0130 17:02:15.601587 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-tl627_a7ff8a9d-40f9-4354-aa10-e7e93907a0a5/manager/0.log" Jan 30 17:02:15 crc kubenswrapper[4740]: I0130 17:02:15.602815 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-bzjc5_6ba6b433-534d-4a14-9fbb-4418b1c39fd9/manager/0.log" Jan 30 17:02:15 crc kubenswrapper[4740]: I0130 17:02:15.947260 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg_107dde7f-ab99-4981-ba7a-0c6756408b54/manager/0.log" Jan 30 17:02:16 crc kubenswrapper[4740]: I0130 17:02:16.110283 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-8d66f78b7-26k2v_72ae6a1c-defc-4fa0-8526-6fa59b0b2138/operator/0.log" Jan 30 17:02:16 crc kubenswrapper[4740]: I0130 17:02:16.561672 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rwqps_6d98558b-10cb-4d22-ac8e-4db35ad5b364/registry-server/0.log" Jan 30 17:02:16 crc kubenswrapper[4740]: I0130 17:02:16.757461 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-dpf55_5a6574e0-d6db-4e3d-9203-c3b28694e68f/manager/0.log" Jan 30 17:02:16 crc kubenswrapper[4740]: I0130 17:02:16.997528 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-hszqm_b8a01322-677f-443a-83fd-6352c7523727/manager/0.log" Jan 30 17:02:17 crc kubenswrapper[4740]: I0130 17:02:17.351022 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-bmhgb_0040ed18-716a-4452-8209-c45c497d7fae/operator/0.log" Jan 30 17:02:17 crc kubenswrapper[4740]: I0130 17:02:17.592946 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-92p8l_011e9da6-1efe-4002-91f3-0aa0923fa015/manager/0.log" Jan 30 17:02:18 crc kubenswrapper[4740]: I0130 17:02:18.012873 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-gtt5t_82688ddf-9d92-4ff1-873b-ca5766766189/manager/0.log" Jan 30 17:02:18 crc kubenswrapper[4740]: I0130 17:02:18.161608 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bdc979b86-rndp8_4b1298c0-d749-42f3-97c1-ad1b19db8f96/manager/0.log" Jan 30 17:02:18 crc kubenswrapper[4740]: I0130 17:02:18.314055 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-6sl24_d6ebfaaf-00f6-430e-bcb2-b5041395a101/manager/0.log" Jan 30 17:02:18 crc kubenswrapper[4740]: I0130 17:02:18.428683 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-df45f6d5f-lc4fv_82b9c083-1154-46de-958e-6a7726aca988/manager/0.log" Jan 30 17:02:26 crc kubenswrapper[4740]: I0130 17:02:26.335258 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:02:26 crc kubenswrapper[4740]: E0130 17:02:26.336230 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:02:41 crc kubenswrapper[4740]: I0130 17:02:41.336086 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:02:41 crc kubenswrapper[4740]: E0130 17:02:41.336944 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:02:49 crc kubenswrapper[4740]: I0130 17:02:49.778232 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-lpktp_0336ee48-8f1e-49ed-a021-a01446330b39/control-plane-machine-set-operator/0.log" Jan 30 17:02:50 crc kubenswrapper[4740]: I0130 17:02:50.119601 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dl6xs_be7f0e88-7c2e-4c1b-a617-9da27584b057/kube-rbac-proxy/0.log" Jan 30 17:02:50 crc kubenswrapper[4740]: I0130 17:02:50.132521 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dl6xs_be7f0e88-7c2e-4c1b-a617-9da27584b057/machine-api-operator/0.log" Jan 30 17:02:54 crc kubenswrapper[4740]: I0130 17:02:54.336151 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:02:54 crc kubenswrapper[4740]: E0130 17:02:54.338250 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:03:05 crc kubenswrapper[4740]: I0130 17:03:05.335917 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:03:06 crc kubenswrapper[4740]: I0130 17:03:06.953134 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"e2b85861d1b3ff70c3fc37cc798d3a7a87fc63bbc5ab5e84863866a94fe12e95"} Jan 30 17:03:11 crc kubenswrapper[4740]: I0130 17:03:11.613524 4740 log.go:25] "Finished parsing 
log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-4xwlh_38135331-191d-4ef6-a002-936b6b4a17b3/cert-manager-controller/0.log" Jan 30 17:03:11 crc kubenswrapper[4740]: I0130 17:03:11.986428 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7dg58_72764858-c1a4-408a-887a-c48ad0b4d10a/cert-manager-cainjector/0.log" Jan 30 17:03:12 crc kubenswrapper[4740]: I0130 17:03:12.315741 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mn7d8_3979d983-a849-4be3-a862-caed0065a705/cert-manager-webhook/0.log" Jan 30 17:03:34 crc kubenswrapper[4740]: I0130 17:03:34.364407 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-b7cbr_0f05dfb1-ebdb-4b8d-8699-1b254807132b/nmstate-console-plugin/0.log" Jan 30 17:03:34 crc kubenswrapper[4740]: I0130 17:03:34.641559 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hs8jj_479ee03d-d745-43ab-83d0-46f6e4cf1a21/nmstate-handler/0.log" Jan 30 17:03:34 crc kubenswrapper[4740]: I0130 17:03:34.758045 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-lb5hz_01faa5ac-05c7-44cf-a393-e67e5e47c683/kube-rbac-proxy/0.log" Jan 30 17:03:34 crc kubenswrapper[4740]: I0130 17:03:34.861485 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-lb5hz_01faa5ac-05c7-44cf-a393-e67e5e47c683/nmstate-metrics/0.log" Jan 30 17:03:34 crc kubenswrapper[4740]: I0130 17:03:34.913599 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dw9f7_27f0888f-27f8-4ebe-86ed-a07a0995a241/nmstate-operator/0.log" Jan 30 17:03:35 crc kubenswrapper[4740]: I0130 17:03:35.116522 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-tf56m_3ba55d47-87a4-4a5e-b3a7-9a737aef9125/nmstate-webhook/0.log" Jan 30 17:03:52 crc kubenswrapper[4740]: I0130 17:03:52.605442 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b8b44847-7889n_05857b7d-f148-447a-96bb-d9846ef7402c/kube-rbac-proxy/0.log" Jan 30 17:03:52 crc kubenswrapper[4740]: I0130 17:03:52.665754 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b8b44847-7889n_05857b7d-f148-447a-96bb-d9846ef7402c/manager/0.log" Jan 30 17:04:08 crc kubenswrapper[4740]: I0130 17:04:08.411390 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fl52v_7fed297b-1b60-4fa1-81ad-f7aff661624d/prometheus-operator/0.log" Jan 30 17:04:08 crc kubenswrapper[4740]: I0130 17:04:08.589225 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_e70968d1-7497-4724-9c80-cf5abdf288ea/prometheus-operator-admission-webhook/0.log" Jan 30 17:04:08 crc kubenswrapper[4740]: I0130 17:04:08.638739 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_27f815e6-2917-46af-8a6d-4bcd66c35042/prometheus-operator-admission-webhook/0.log" Jan 30 17:04:08 crc kubenswrapper[4740]: I0130 17:04:08.820760 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-pdgvg_6a0acde2-70b4-4622-a609-290cbc5f253f/operator/0.log" Jan 30 17:04:08 crc kubenswrapper[4740]: I0130 17:04:08.876426 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-r2zbm_522756c7-f451-4879-b2b3-2d19b80cb751/perses-operator/0.log" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.582528 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lx474"] Jan 30 17:04:25 crc kubenswrapper[4740]: E0130 17:04:25.583637 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0afeb76-6c8f-47c6-81b9-ec569da67517" containerName="keystone-cron" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.583659 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0afeb76-6c8f-47c6-81b9-ec569da67517" containerName="keystone-cron" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.583946 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0afeb76-6c8f-47c6-81b9-ec569da67517" containerName="keystone-cron" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.585734 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.601177 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lx474"] Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.679847 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-catalog-content\") pod \"redhat-operators-lx474\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.680134 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bsjk\" (UniqueName: \"kubernetes.io/projected/0de43b11-619b-46ea-89e1-cfb16b7980dc-kube-api-access-8bsjk\") pod \"redhat-operators-lx474\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.680198 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-utilities\") pod \"redhat-operators-lx474\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.782403 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-catalog-content\") pod \"redhat-operators-lx474\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.782564 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bsjk\" (UniqueName: \"kubernetes.io/projected/0de43b11-619b-46ea-89e1-cfb16b7980dc-kube-api-access-8bsjk\") pod \"redhat-operators-lx474\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:25 crc 
kubenswrapper[4740]: I0130 17:04:25.782652 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-utilities\") pod \"redhat-operators-lx474\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.783077 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-catalog-content\") pod \"redhat-operators-lx474\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.783154 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-utilities\") pod \"redhat-operators-lx474\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.803165 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bsjk\" (UniqueName: \"kubernetes.io/projected/0de43b11-619b-46ea-89e1-cfb16b7980dc-kube-api-access-8bsjk\") pod \"redhat-operators-lx474\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:25 crc kubenswrapper[4740]: I0130 17:04:25.919234 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:26 crc kubenswrapper[4740]: I0130 17:04:26.406913 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-spzrl_2516854f-e7b5-4af2-a473-72ad1644043a/kube-rbac-proxy/0.log" Jan 30 17:04:26 crc kubenswrapper[4740]: I0130 17:04:26.532179 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lx474"] Jan 30 17:04:26 crc kubenswrapper[4740]: I0130 17:04:26.537531 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-spzrl_2516854f-e7b5-4af2-a473-72ad1644043a/controller/0.log" Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.119311 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-frr-files/0.log" Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.354744 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-reloader/0.log" Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.443187 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-frr-files/0.log" Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.471889 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-reloader/0.log" Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.485872 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-metrics/0.log" Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.654673 4740 generic.go:334] "Generic (PLEG): container finished" podID="0de43b11-619b-46ea-89e1-cfb16b7980dc" 
containerID="71ce25628c59505a0795048a7b6bfdd272130b856c849ed02a9a4c35fcbba369" exitCode=0 Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.654791 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx474" event={"ID":"0de43b11-619b-46ea-89e1-cfb16b7980dc","Type":"ContainerDied","Data":"71ce25628c59505a0795048a7b6bfdd272130b856c849ed02a9a4c35fcbba369"} Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.655209 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx474" event={"ID":"0de43b11-619b-46ea-89e1-cfb16b7980dc","Type":"ContainerStarted","Data":"473e6a14144087111cf38815998ee896471bee28bd13997500d9d54ec4fffc77"} Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.657440 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.729042 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-metrics/0.log" Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.781473 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-reloader/0.log" Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.821117 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-metrics/0.log" Jan 30 17:04:27 crc kubenswrapper[4740]: I0130 17:04:27.867124 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-frr-files/0.log" Jan 30 17:04:28 crc kubenswrapper[4740]: I0130 17:04:28.353510 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-reloader/0.log" Jan 30 17:04:28 crc kubenswrapper[4740]: I0130 17:04:28.409553 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-metrics/0.log" Jan 30 17:04:28 crc kubenswrapper[4740]: I0130 17:04:28.517061 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/controller/0.log" Jan 30 17:04:28 crc kubenswrapper[4740]: I0130 17:04:28.588024 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-frr-files/0.log" Jan 30 17:04:29 crc kubenswrapper[4740]: I0130 17:04:29.376889 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/frr-metrics/0.log" Jan 30 17:04:29 crc kubenswrapper[4740]: I0130 17:04:29.408299 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/kube-rbac-proxy-frr/0.log" Jan 30 17:04:29 crc kubenswrapper[4740]: I0130 17:04:29.409156 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/kube-rbac-proxy/0.log" Jan 30 17:04:29 crc kubenswrapper[4740]: I0130 17:04:29.720366 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx474" 
event={"ID":"0de43b11-619b-46ea-89e1-cfb16b7980dc","Type":"ContainerStarted","Data":"2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074"} Jan 30 17:04:29 crc kubenswrapper[4740]: I0130 17:04:29.783120 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/reloader/0.log" Jan 30 17:04:29 crc kubenswrapper[4740]: I0130 17:04:29.932118 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-kj46g_229e5897-0e63-4b65-8142-77d97ef63ca3/frr-k8s-webhook-server/0.log" Jan 30 17:04:30 crc kubenswrapper[4740]: I0130 17:04:30.166853 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-679cd9954d-7f5xw_de72a0d2-8f4e-442e-99e0-8179782f810b/manager/0.log" Jan 30 17:04:30 crc kubenswrapper[4740]: I0130 17:04:30.422092 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f44447989-gnfds_7e49666f-5b34-430c-bfa4-c85208433cda/webhook-server/0.log" Jan 30 17:04:30 crc kubenswrapper[4740]: I0130 17:04:30.714527 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbsrp_d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9/kube-rbac-proxy/0.log" Jan 30 17:04:30 crc kubenswrapper[4740]: I0130 17:04:30.981174 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/frr/0.log" Jan 30 17:04:31 crc kubenswrapper[4740]: I0130 17:04:31.963604 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbsrp_d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9/speaker/0.log" Jan 30 17:04:38 crc kubenswrapper[4740]: I0130 17:04:38.836124 4740 generic.go:334] "Generic (PLEG): container finished" podID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerID="2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074" exitCode=0 Jan 30 17:04:38 crc kubenswrapper[4740]: I0130 17:04:38.836203 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx474" event={"ID":"0de43b11-619b-46ea-89e1-cfb16b7980dc","Type":"ContainerDied","Data":"2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074"} Jan 30 17:04:40 crc kubenswrapper[4740]: I0130 17:04:40.867807 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx474" event={"ID":"0de43b11-619b-46ea-89e1-cfb16b7980dc","Type":"ContainerStarted","Data":"85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1"} Jan 30 17:04:40 crc kubenswrapper[4740]: I0130 17:04:40.902017 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lx474" podStartSLOduration=3.931122514 podStartE2EDuration="15.901987369s" podCreationTimestamp="2026-01-30 17:04:25 +0000 UTC" firstStartedPulling="2026-01-30 17:04:27.657179494 +0000 UTC m=+4116.294242093" lastFinishedPulling="2026-01-30 17:04:39.628044349 +0000 UTC m=+4128.265106948" observedRunningTime="2026-01-30 17:04:40.888062721 +0000 UTC m=+4129.525125320" watchObservedRunningTime="2026-01-30 17:04:40.901987369 +0000 UTC m=+4129.539049968" Jan 30 17:04:45 crc kubenswrapper[4740]: I0130 17:04:45.920319 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:45 crc kubenswrapper[4740]: I0130 17:04:45.920965 4740 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:04:46 crc kubenswrapper[4740]: I0130 17:04:46.982060 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lx474" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerName="registry-server" probeResult="failure" output=< Jan 30 17:04:46 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 17:04:46 crc kubenswrapper[4740]: > Jan 30 17:04:53 crc kubenswrapper[4740]: I0130 17:04:53.406982 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/util/0.log" Jan 30 17:04:54 crc kubenswrapper[4740]: I0130 17:04:54.008101 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/pull/0.log" Jan 30 17:04:54 crc kubenswrapper[4740]: I0130 17:04:54.052542 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/util/0.log" Jan 30 17:04:54 crc kubenswrapper[4740]: I0130 17:04:54.086683 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/pull/0.log" Jan 30 17:04:54 crc kubenswrapper[4740]: I0130 17:04:54.380317 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/pull/0.log" Jan 30 17:04:54 crc kubenswrapper[4740]: I0130 17:04:54.401023 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/util/0.log" Jan 30 17:04:54 crc kubenswrapper[4740]: I0130 17:04:54.501651 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/extract/0.log" Jan 30 17:04:54 crc kubenswrapper[4740]: I0130 17:04:54.790156 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/util/0.log" Jan 30 17:04:55 crc kubenswrapper[4740]: I0130 17:04:55.008186 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/util/0.log" Jan 30 17:04:55 crc kubenswrapper[4740]: I0130 17:04:55.039499 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/pull/0.log" Jan 30 17:04:55 crc kubenswrapper[4740]: I0130 17:04:55.086430 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/pull/0.log" Jan 30 17:04:55 crc kubenswrapper[4740]: I0130 17:04:55.334980 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/pull/0.log" Jan 30 17:04:55 crc kubenswrapper[4740]: I0130 17:04:55.361754 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/extract/0.log" Jan 30 17:04:55 crc kubenswrapper[4740]: I0130 17:04:55.391470 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/util/0.log" Jan 30 17:04:55 crc kubenswrapper[4740]: I0130 17:04:55.799149 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/util/0.log" Jan 30 17:04:56 crc kubenswrapper[4740]: I0130 17:04:56.017987 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/pull/0.log" Jan 30 17:04:56 crc kubenswrapper[4740]: I0130 17:04:56.055951 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/pull/0.log" Jan 30 17:04:56 crc kubenswrapper[4740]: I0130 17:04:56.093105 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/util/0.log" Jan 30 17:04:56 crc kubenswrapper[4740]: I0130 17:04:56.304902 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/util/0.log" Jan 30 17:04:56 crc kubenswrapper[4740]: I0130 17:04:56.394025 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/pull/0.log" Jan 30 17:04:56 crc kubenswrapper[4740]: I0130 17:04:56.394978 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/extract/0.log" Jan 30 17:04:56 crc kubenswrapper[4740]: I0130 17:04:56.678096 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/util/0.log" Jan 30 17:04:56 crc kubenswrapper[4740]: I0130 17:04:56.980093 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lx474" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerName="registry-server" probeResult="failure" output=< Jan 30 17:04:56 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 17:04:56 crc kubenswrapper[4740]: > Jan 30 17:04:57 crc kubenswrapper[4740]: I0130 17:04:57.047904 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/util/0.log" Jan 30 17:04:57 crc kubenswrapper[4740]: I0130 17:04:57.116165 4740 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/pull/0.log" Jan 30 17:04:57 crc kubenswrapper[4740]: I0130 17:04:57.116409 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/pull/0.log" Jan 30 17:04:57 crc kubenswrapper[4740]: I0130 17:04:57.421616 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/pull/0.log" Jan 30 17:04:57 crc kubenswrapper[4740]: I0130 17:04:57.450610 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/util/0.log" Jan 30 17:04:57 crc kubenswrapper[4740]: I0130 17:04:57.530028 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/extract/0.log" Jan 30 17:04:57 crc kubenswrapper[4740]: I0130 17:04:57.880854 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-utilities/0.log" Jan 30 17:04:58 crc kubenswrapper[4740]: I0130 17:04:58.378561 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-content/0.log" Jan 30 17:04:58 crc kubenswrapper[4740]: I0130 17:04:58.383237 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-content/0.log" Jan 30 17:04:58 crc kubenswrapper[4740]: I0130 17:04:58.432052 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-utilities/0.log" Jan 30 17:04:58 crc kubenswrapper[4740]: I0130 17:04:58.710839 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-utilities/0.log" Jan 30 17:04:58 crc kubenswrapper[4740]: I0130 17:04:58.727001 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-content/0.log" Jan 30 17:04:58 crc kubenswrapper[4740]: I0130 17:04:58.826789 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-utilities/0.log" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.095723 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-content/0.log" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.098690 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-content/0.log" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.126664 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-utilities/0.log" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.484686 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-content/0.log" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.538367 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/registry-server/0.log" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.565933 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-utilities/0.log" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.624368 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m6kkt"] Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.627488 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.680851 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6kkt"] Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.734974 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zbxl\" (UniqueName: \"kubernetes.io/projected/362e5570-9401-4c85-97da-e6e245dc8601-kube-api-access-2zbxl\") pod \"certified-operators-m6kkt\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.735178 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-utilities\") pod \"certified-operators-m6kkt\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.735326 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-catalog-content\") pod \"certified-operators-m6kkt\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.839092 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-utilities\") pod \"certified-operators-m6kkt\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.839266 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-catalog-content\") pod \"certified-operators-m6kkt\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.839304 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2zbxl\" (UniqueName: \"kubernetes.io/projected/362e5570-9401-4c85-97da-e6e245dc8601-kube-api-access-2zbxl\") pod \"certified-operators-m6kkt\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.840404 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-utilities\") pod \"certified-operators-m6kkt\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.840730 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-catalog-content\") pod \"certified-operators-m6kkt\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.877500 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zbxl\" (UniqueName: \"kubernetes.io/projected/362e5570-9401-4c85-97da-e6e245dc8601-kube-api-access-2zbxl\") pod \"certified-operators-m6kkt\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.983845 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gqx29_74ccccc6-dc66-4346-8d92-b38103ce5d69/marketplace-operator/0.log" Jan 30 17:04:59 crc kubenswrapper[4740]: I0130 17:04:59.984893 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:05:00 crc kubenswrapper[4740]: I0130 17:05:00.144862 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-utilities/0.log" Jan 30 17:05:00 crc kubenswrapper[4740]: I0130 17:05:00.480772 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/registry-server/0.log" Jan 30 17:05:00 crc kubenswrapper[4740]: I0130 17:05:00.654849 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6kkt"] Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.125788 4740 generic.go:334] "Generic (PLEG): container finished" podID="362e5570-9401-4c85-97da-e6e245dc8601" containerID="5a3d6592636e75a8de6f4a861840de0d900d82a072d5ca062ad38e32a39fa5e8" exitCode=0 Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.125857 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6kkt" event={"ID":"362e5570-9401-4c85-97da-e6e245dc8601","Type":"ContainerDied","Data":"5a3d6592636e75a8de6f4a861840de0d900d82a072d5ca062ad38e32a39fa5e8"} Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.125898 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6kkt" event={"ID":"362e5570-9401-4c85-97da-e6e245dc8601","Type":"ContainerStarted","Data":"5981e61b92d54ed17ec7080c73f41f9b2925de93c3ae3e7c2b21f365500f04ee"} Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.149216 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-content/0.log" Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.212787 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-utilities/0.log" Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.232902 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-content/0.log" Jan 30 17:05:01 crc kubenswrapper[4740]: E0130 17:05:01.244548 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod362e5570_9401_4c85_97da_e6e245dc8601.slice/crio-conmon-5a3d6592636e75a8de6f4a861840de0d900d82a072d5ca062ad38e32a39fa5e8.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.443083 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-content/0.log" Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.469136 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-utilities/0.log" Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.480820 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lx474_0de43b11-619b-46ea-89e1-cfb16b7980dc/extract-utilities/0.log" Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.701052 4740 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/registry-server/0.log" Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.833112 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lx474_0de43b11-619b-46ea-89e1-cfb16b7980dc/extract-content/0.log" Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.847542 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lx474_0de43b11-619b-46ea-89e1-cfb16b7980dc/extract-utilities/0.log" Jan 30 17:05:01 crc kubenswrapper[4740]: I0130 17:05:01.905305 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lx474_0de43b11-619b-46ea-89e1-cfb16b7980dc/extract-content/0.log" Jan 30 17:05:02 crc kubenswrapper[4740]: I0130 17:05:02.448451 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lx474_0de43b11-619b-46ea-89e1-cfb16b7980dc/extract-utilities/0.log" Jan 30 17:05:02 crc kubenswrapper[4740]: I0130 17:05:02.547849 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-utilities/0.log" Jan 30 17:05:02 crc kubenswrapper[4740]: I0130 17:05:02.564669 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lx474_0de43b11-619b-46ea-89e1-cfb16b7980dc/registry-server/0.log" Jan 30 17:05:02 crc kubenswrapper[4740]: I0130 17:05:02.690521 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lx474_0de43b11-619b-46ea-89e1-cfb16b7980dc/extract-content/0.log" Jan 30 17:05:03 crc kubenswrapper[4740]: I0130 17:05:03.160674 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6kkt" event={"ID":"362e5570-9401-4c85-97da-e6e245dc8601","Type":"ContainerStarted","Data":"4f8df25721db9c3733a8b7a8e502daee6aa11d5e64117793dfabd029472e2f4b"} Jan 30 17:05:03 crc kubenswrapper[4740]: I0130 17:05:03.583627 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-content/0.log" Jan 30 17:05:03 crc kubenswrapper[4740]: I0130 17:05:03.598324 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-content/0.log" Jan 30 17:05:03 crc kubenswrapper[4740]: I0130 17:05:03.603811 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-utilities/0.log" Jan 30 17:05:03 crc kubenswrapper[4740]: I0130 17:05:03.799654 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-utilities/0.log" Jan 30 17:05:03 crc kubenswrapper[4740]: I0130 17:05:03.974250 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-content/0.log" Jan 30 17:05:04 crc kubenswrapper[4740]: I0130 17:05:04.564463 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/registry-server/0.log" Jan 30 17:05:06 crc kubenswrapper[4740]: I0130 17:05:06.222205 4740 
generic.go:334] "Generic (PLEG): container finished" podID="362e5570-9401-4c85-97da-e6e245dc8601" containerID="4f8df25721db9c3733a8b7a8e502daee6aa11d5e64117793dfabd029472e2f4b" exitCode=0 Jan 30 17:05:06 crc kubenswrapper[4740]: I0130 17:05:06.222294 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6kkt" event={"ID":"362e5570-9401-4c85-97da-e6e245dc8601","Type":"ContainerDied","Data":"4f8df25721db9c3733a8b7a8e502daee6aa11d5e64117793dfabd029472e2f4b"} Jan 30 17:05:06 crc kubenswrapper[4740]: I0130 17:05:06.977555 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lx474" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerName="registry-server" probeResult="failure" output=< Jan 30 17:05:06 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 17:05:06 crc kubenswrapper[4740]: > Jan 30 17:05:07 crc kubenswrapper[4740]: I0130 17:05:07.240761 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6kkt" event={"ID":"362e5570-9401-4c85-97da-e6e245dc8601","Type":"ContainerStarted","Data":"f0f180bb041d128db05a767a54c107256ebd27ee625704450622581c975279be"} Jan 30 17:05:07 crc kubenswrapper[4740]: I0130 17:05:07.270424 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m6kkt" podStartSLOduration=2.745899756 podStartE2EDuration="8.270395722s" podCreationTimestamp="2026-01-30 17:04:59 +0000 UTC" firstStartedPulling="2026-01-30 17:05:01.129834589 +0000 UTC m=+4149.766897188" lastFinishedPulling="2026-01-30 17:05:06.654330555 +0000 UTC m=+4155.291393154" observedRunningTime="2026-01-30 17:05:07.265619933 +0000 UTC m=+4155.902682532" watchObservedRunningTime="2026-01-30 17:05:07.270395722 +0000 UTC m=+4155.907458331" Jan 30 17:05:09 crc kubenswrapper[4740]: I0130 17:05:09.986759 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:05:09 crc kubenswrapper[4740]: I0130 17:05:09.987449 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:05:10 crc kubenswrapper[4740]: I0130 17:05:10.092941 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:05:15 crc kubenswrapper[4740]: I0130 17:05:15.991145 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:05:16 crc kubenswrapper[4740]: I0130 17:05:16.086502 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:05:16 crc kubenswrapper[4740]: I0130 17:05:16.236197 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lx474"] Jan 30 17:05:17 crc kubenswrapper[4740]: I0130 17:05:17.352568 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lx474" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerName="registry-server" containerID="cri-o://85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1" gracePeriod=2 Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.247040 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.359734 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-catalog-content\") pod \"0de43b11-619b-46ea-89e1-cfb16b7980dc\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.359922 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-utilities\") pod \"0de43b11-619b-46ea-89e1-cfb16b7980dc\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.360084 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bsjk\" (UniqueName: \"kubernetes.io/projected/0de43b11-619b-46ea-89e1-cfb16b7980dc-kube-api-access-8bsjk\") pod \"0de43b11-619b-46ea-89e1-cfb16b7980dc\" (UID: \"0de43b11-619b-46ea-89e1-cfb16b7980dc\") " Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.360915 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-utilities" (OuterVolumeSpecName: "utilities") pod "0de43b11-619b-46ea-89e1-cfb16b7980dc" (UID: "0de43b11-619b-46ea-89e1-cfb16b7980dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.365472 4740 generic.go:334] "Generic (PLEG): container finished" podID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerID="85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1" exitCode=0 Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.365525 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx474" event={"ID":"0de43b11-619b-46ea-89e1-cfb16b7980dc","Type":"ContainerDied","Data":"85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1"} Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.365558 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lx474" event={"ID":"0de43b11-619b-46ea-89e1-cfb16b7980dc","Type":"ContainerDied","Data":"473e6a14144087111cf38815998ee896471bee28bd13997500d9d54ec4fffc77"} Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.365578 4740 scope.go:117] "RemoveContainer" containerID="85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.365740 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lx474" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.382403 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de43b11-619b-46ea-89e1-cfb16b7980dc-kube-api-access-8bsjk" (OuterVolumeSpecName: "kube-api-access-8bsjk") pod "0de43b11-619b-46ea-89e1-cfb16b7980dc" (UID: "0de43b11-619b-46ea-89e1-cfb16b7980dc"). InnerVolumeSpecName "kube-api-access-8bsjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.428733 4740 scope.go:117] "RemoveContainer" containerID="2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.461606 4740 scope.go:117] "RemoveContainer" containerID="71ce25628c59505a0795048a7b6bfdd272130b856c849ed02a9a4c35fcbba369" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.463700 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bsjk\" (UniqueName: \"kubernetes.io/projected/0de43b11-619b-46ea-89e1-cfb16b7980dc-kube-api-access-8bsjk\") on node \"crc\" DevicePath \"\"" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.463726 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.517816 4740 scope.go:117] "RemoveContainer" containerID="85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.518380 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0de43b11-619b-46ea-89e1-cfb16b7980dc" (UID: "0de43b11-619b-46ea-89e1-cfb16b7980dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:05:18 crc kubenswrapper[4740]: E0130 17:05:18.518596 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1\": container with ID starting with 85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1 not found: ID does not exist" containerID="85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.518634 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1"} err="failed to get container status \"85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1\": rpc error: code = NotFound desc = could not find container \"85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1\": container with ID starting with 85d050dcc267a39eb87e697985125850f8026517da3b965f1e2f38b70dc43dd1 not found: ID does not exist" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.518694 4740 scope.go:117] "RemoveContainer" containerID="2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074" Jan 30 17:05:18 crc kubenswrapper[4740]: E0130 17:05:18.519075 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074\": container with ID starting with 2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074 not found: ID does not exist" containerID="2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.519098 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074"} err="failed to get container status 
\"2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074\": rpc error: code = NotFound desc = could not find container \"2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074\": container with ID starting with 2aa8e4e1d7057a0525efbc518608e026edbf8adf6df839eb2bc8e454e5097074 not found: ID does not exist" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.519114 4740 scope.go:117] "RemoveContainer" containerID="71ce25628c59505a0795048a7b6bfdd272130b856c849ed02a9a4c35fcbba369" Jan 30 17:05:18 crc kubenswrapper[4740]: E0130 17:05:18.519450 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ce25628c59505a0795048a7b6bfdd272130b856c849ed02a9a4c35fcbba369\": container with ID starting with 71ce25628c59505a0795048a7b6bfdd272130b856c849ed02a9a4c35fcbba369 not found: ID does not exist" containerID="71ce25628c59505a0795048a7b6bfdd272130b856c849ed02a9a4c35fcbba369" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.519479 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ce25628c59505a0795048a7b6bfdd272130b856c849ed02a9a4c35fcbba369"} err="failed to get container status \"71ce25628c59505a0795048a7b6bfdd272130b856c849ed02a9a4c35fcbba369\": rpc error: code = NotFound desc = could not find container \"71ce25628c59505a0795048a7b6bfdd272130b856c849ed02a9a4c35fcbba369\": container with ID starting with 71ce25628c59505a0795048a7b6bfdd272130b856c849ed02a9a4c35fcbba369 not found: ID does not exist" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.566063 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de43b11-619b-46ea-89e1-cfb16b7980dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.707934 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lx474"] Jan 30 17:05:18 crc kubenswrapper[4740]: I0130 17:05:18.718121 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lx474"] Jan 30 17:05:19 crc kubenswrapper[4740]: I0130 17:05:19.364221 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" path="/var/lib/kubelet/pods/0de43b11-619b-46ea-89e1-cfb16b7980dc/volumes" Jan 30 17:05:20 crc kubenswrapper[4740]: I0130 17:05:20.045300 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:05:21 crc kubenswrapper[4740]: I0130 17:05:21.636611 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6kkt"] Jan 30 17:05:21 crc kubenswrapper[4740]: I0130 17:05:21.637530 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m6kkt" podUID="362e5570-9401-4c85-97da-e6e245dc8601" containerName="registry-server" containerID="cri-o://f0f180bb041d128db05a767a54c107256ebd27ee625704450622581c975279be" gracePeriod=2 Jan 30 17:05:21 crc kubenswrapper[4740]: E0130 17:05:21.905217 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod362e5570_9401_4c85_97da_e6e245dc8601.slice/crio-f0f180bb041d128db05a767a54c107256ebd27ee625704450622581c975279be.scope\": RecentStats: unable to find data 
in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod362e5570_9401_4c85_97da_e6e245dc8601.slice/crio-conmon-f0f180bb041d128db05a767a54c107256ebd27ee625704450622581c975279be.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.421228 4740 generic.go:334] "Generic (PLEG): container finished" podID="362e5570-9401-4c85-97da-e6e245dc8601" containerID="f0f180bb041d128db05a767a54c107256ebd27ee625704450622581c975279be" exitCode=0 Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.421305 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6kkt" event={"ID":"362e5570-9401-4c85-97da-e6e245dc8601","Type":"ContainerDied","Data":"f0f180bb041d128db05a767a54c107256ebd27ee625704450622581c975279be"} Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.623765 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.770397 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-utilities\") pod \"362e5570-9401-4c85-97da-e6e245dc8601\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.770499 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-catalog-content\") pod \"362e5570-9401-4c85-97da-e6e245dc8601\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.770553 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zbxl\" (UniqueName: \"kubernetes.io/projected/362e5570-9401-4c85-97da-e6e245dc8601-kube-api-access-2zbxl\") pod \"362e5570-9401-4c85-97da-e6e245dc8601\" (UID: \"362e5570-9401-4c85-97da-e6e245dc8601\") " Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.771338 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-utilities" (OuterVolumeSpecName: "utilities") pod "362e5570-9401-4c85-97da-e6e245dc8601" (UID: "362e5570-9401-4c85-97da-e6e245dc8601"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.777971 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/362e5570-9401-4c85-97da-e6e245dc8601-kube-api-access-2zbxl" (OuterVolumeSpecName: "kube-api-access-2zbxl") pod "362e5570-9401-4c85-97da-e6e245dc8601" (UID: "362e5570-9401-4c85-97da-e6e245dc8601"). InnerVolumeSpecName "kube-api-access-2zbxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.856520 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "362e5570-9401-4c85-97da-e6e245dc8601" (UID: "362e5570-9401-4c85-97da-e6e245dc8601"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.873859 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.873909 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362e5570-9401-4c85-97da-e6e245dc8601-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:05:22 crc kubenswrapper[4740]: I0130 17:05:22.873926 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zbxl\" (UniqueName: \"kubernetes.io/projected/362e5570-9401-4c85-97da-e6e245dc8601-kube-api-access-2zbxl\") on node \"crc\" DevicePath \"\"" Jan 30 17:05:23 crc kubenswrapper[4740]: I0130 17:05:23.437107 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6kkt" event={"ID":"362e5570-9401-4c85-97da-e6e245dc8601","Type":"ContainerDied","Data":"5981e61b92d54ed17ec7080c73f41f9b2925de93c3ae3e7c2b21f365500f04ee"} Jan 30 17:05:23 crc kubenswrapper[4740]: I0130 17:05:23.437518 4740 scope.go:117] "RemoveContainer" containerID="f0f180bb041d128db05a767a54c107256ebd27ee625704450622581c975279be" Jan 30 17:05:23 crc kubenswrapper[4740]: I0130 17:05:23.437473 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6kkt" Jan 30 17:05:23 crc kubenswrapper[4740]: I0130 17:05:23.475420 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6kkt"] Jan 30 17:05:23 crc kubenswrapper[4740]: I0130 17:05:23.483628 4740 scope.go:117] "RemoveContainer" containerID="4f8df25721db9c3733a8b7a8e502daee6aa11d5e64117793dfabd029472e2f4b" Jan 30 17:05:23 crc kubenswrapper[4740]: I0130 17:05:23.497442 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m6kkt"] Jan 30 17:05:23 crc kubenswrapper[4740]: I0130 17:05:23.508905 4740 scope.go:117] "RemoveContainer" containerID="5a3d6592636e75a8de6f4a861840de0d900d82a072d5ca062ad38e32a39fa5e8" Jan 30 17:05:24 crc kubenswrapper[4740]: I0130 17:05:24.454987 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:05:24 crc kubenswrapper[4740]: I0130 17:05:24.455334 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:05:24 crc kubenswrapper[4740]: I0130 17:05:24.948671 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_e70968d1-7497-4724-9c80-cf5abdf288ea/prometheus-operator-admission-webhook/0.log" Jan 30 17:05:24 crc kubenswrapper[4740]: I0130 17:05:24.973832 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fl52v_7fed297b-1b60-4fa1-81ad-f7aff661624d/prometheus-operator/0.log" Jan 30 17:05:25 crc kubenswrapper[4740]: I0130 17:05:25.113942 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_27f815e6-2917-46af-8a6d-4bcd66c35042/prometheus-operator-admission-webhook/0.log" Jan 30 17:05:25 crc kubenswrapper[4740]: I0130 17:05:25.301793 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-r2zbm_522756c7-f451-4879-b2b3-2d19b80cb751/perses-operator/0.log" Jan 30 17:05:25 crc kubenswrapper[4740]: I0130 17:05:25.312040 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-pdgvg_6a0acde2-70b4-4622-a609-290cbc5f253f/operator/0.log" Jan 30 17:05:25 crc kubenswrapper[4740]: I0130 17:05:25.349142 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="362e5570-9401-4c85-97da-e6e245dc8601" path="/var/lib/kubelet/pods/362e5570-9401-4c85-97da-e6e245dc8601/volumes" Jan 30 17:05:43 crc kubenswrapper[4740]: I0130 17:05:43.985127 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b8b44847-7889n_05857b7d-f148-447a-96bb-d9846ef7402c/kube-rbac-proxy/0.log" Jan 30 17:05:44 crc kubenswrapper[4740]: I0130 17:05:44.065419 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b8b44847-7889n_05857b7d-f148-447a-96bb-d9846ef7402c/manager/0.log" Jan 30 17:05:54 crc kubenswrapper[4740]: I0130 17:05:54.455337 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:05:54 crc kubenswrapper[4740]: I0130 17:05:54.455977 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:06:24 crc kubenswrapper[4740]: I0130 17:06:24.455227 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:06:24 crc kubenswrapper[4740]: I0130 17:06:24.455919 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:06:24 crc kubenswrapper[4740]: I0130 17:06:24.455988 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 17:06:24 crc kubenswrapper[4740]: I0130 17:06:24.457045 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e2b85861d1b3ff70c3fc37cc798d3a7a87fc63bbc5ab5e84863866a94fe12e95"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:06:24 crc kubenswrapper[4740]: I0130 17:06:24.457121 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://e2b85861d1b3ff70c3fc37cc798d3a7a87fc63bbc5ab5e84863866a94fe12e95" gracePeriod=600 Jan 30 17:06:25 crc kubenswrapper[4740]: I0130 17:06:25.192246 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="e2b85861d1b3ff70c3fc37cc798d3a7a87fc63bbc5ab5e84863866a94fe12e95" exitCode=0 Jan 30 17:06:25 crc kubenswrapper[4740]: I0130 17:06:25.192318 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"e2b85861d1b3ff70c3fc37cc798d3a7a87fc63bbc5ab5e84863866a94fe12e95"} Jan 30 17:06:25 crc kubenswrapper[4740]: I0130 17:06:25.192848 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f"} Jan 30 17:06:25 crc kubenswrapper[4740]: I0130 17:06:25.192885 4740 scope.go:117] "RemoveContainer" containerID="edf79d1d07feee4319562a7943c4b77cbb8e3f660ca7a2bb2e0aea8db6a6ec3a" Jan 30 17:06:38 crc kubenswrapper[4740]: I0130 17:06:38.564185 4740 scope.go:117] "RemoveContainer" containerID="7b4768d37f9b98c3d20cad0fa19329f7a71fc421520771a427709e915e67316a" Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.711100 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sd82x"] Jan 30 17:06:55 crc kubenswrapper[4740]: E0130 17:06:55.712783 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerName="extract-content" Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.712808 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerName="extract-content" Jan 30 17:06:55 crc kubenswrapper[4740]: E0130 17:06:55.712858 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362e5570-9401-4c85-97da-e6e245dc8601" containerName="extract-utilities" Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.712866 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="362e5570-9401-4c85-97da-e6e245dc8601" containerName="extract-utilities" Jan 30 17:06:55 crc kubenswrapper[4740]: E0130 17:06:55.712877 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362e5570-9401-4c85-97da-e6e245dc8601" containerName="extract-content" Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.712884 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="362e5570-9401-4c85-97da-e6e245dc8601" containerName="extract-content" Jan 30 17:06:55 crc kubenswrapper[4740]: E0130 17:06:55.712893 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362e5570-9401-4c85-97da-e6e245dc8601" containerName="registry-server" Jan 30 17:06:55 crc 
Jan 30 17:06:55 crc kubenswrapper[4740]: E0130 17:06:55.712916 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerName="extract-utilities"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.712925 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerName="extract-utilities"
Jan 30 17:06:55 crc kubenswrapper[4740]: E0130 17:06:55.712939 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerName="registry-server"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.712946 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerName="registry-server"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.713230 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de43b11-619b-46ea-89e1-cfb16b7980dc" containerName="registry-server"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.713260 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="362e5570-9401-4c85-97da-e6e245dc8601" containerName="registry-server"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.715829 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.727179 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sd82x"]
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.873261 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-utilities\") pod \"community-operators-sd82x\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.873511 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmpcs\" (UniqueName: \"kubernetes.io/projected/68574236-7419-43cd-a566-cda066f1fc88-kube-api-access-wmpcs\") pod \"community-operators-sd82x\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.873886 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-catalog-content\") pod \"community-operators-sd82x\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.975956 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-catalog-content\") pod \"community-operators-sd82x\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.976112 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-utilities\") pod \"community-operators-sd82x\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.976216 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmpcs\" (UniqueName: \"kubernetes.io/projected/68574236-7419-43cd-a566-cda066f1fc88-kube-api-access-wmpcs\") pod \"community-operators-sd82x\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.976739 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-utilities\") pod \"community-operators-sd82x\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:06:55 crc kubenswrapper[4740]: I0130 17:06:55.977061 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-catalog-content\") pod \"community-operators-sd82x\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:06:56 crc kubenswrapper[4740]: I0130 17:06:56.375368 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmpcs\" (UniqueName: \"kubernetes.io/projected/68574236-7419-43cd-a566-cda066f1fc88-kube-api-access-wmpcs\") pod \"community-operators-sd82x\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:06:56 crc kubenswrapper[4740]: I0130 17:06:56.378516 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:06:56 crc kubenswrapper[4740]: I0130 17:06:56.930590 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sd82x"]
Jan 30 17:06:57 crc kubenswrapper[4740]: I0130 17:06:57.578443 4740 generic.go:334] "Generic (PLEG): container finished" podID="68574236-7419-43cd-a566-cda066f1fc88" containerID="926a2380ab00156c2cbdac303a968aea45a084126248f9a809ad6707e7cde886" exitCode=0
Jan 30 17:06:57 crc kubenswrapper[4740]: I0130 17:06:57.578747 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd82x" event={"ID":"68574236-7419-43cd-a566-cda066f1fc88","Type":"ContainerDied","Data":"926a2380ab00156c2cbdac303a968aea45a084126248f9a809ad6707e7cde886"}
Jan 30 17:06:57 crc kubenswrapper[4740]: I0130 17:06:57.578787 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd82x" event={"ID":"68574236-7419-43cd-a566-cda066f1fc88","Type":"ContainerStarted","Data":"625737ea448464f30241c04a4413c110e1851da0a4321ab4ffcdeab5ec7085d0"}
Jan 30 17:06:59 crc kubenswrapper[4740]: I0130 17:06:59.602407 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd82x" event={"ID":"68574236-7419-43cd-a566-cda066f1fc88","Type":"ContainerStarted","Data":"fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456"}
Jan 30 17:07:01 crc kubenswrapper[4740]: I0130 17:07:01.625714 4740 generic.go:334] "Generic (PLEG): container finished" podID="68574236-7419-43cd-a566-cda066f1fc88" containerID="fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456" exitCode=0
Jan 30 17:07:01 crc kubenswrapper[4740]: I0130 17:07:01.625810 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd82x" event={"ID":"68574236-7419-43cd-a566-cda066f1fc88","Type":"ContainerDied","Data":"fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456"}
Jan 30 17:07:02 crc kubenswrapper[4740]: I0130 17:07:02.639326 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd82x" event={"ID":"68574236-7419-43cd-a566-cda066f1fc88","Type":"ContainerStarted","Data":"c2de5fb3c42a78b1f36e31524bd60d94e36640913d124d011dee797e795c892c"}
Jan 30 17:07:02 crc kubenswrapper[4740]: I0130 17:07:02.667065 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sd82x" podStartSLOduration=3.086210906 podStartE2EDuration="7.667034232s" podCreationTimestamp="2026-01-30 17:06:55 +0000 UTC" firstStartedPulling="2026-01-30 17:06:57.580774743 +0000 UTC m=+4266.217837342" lastFinishedPulling="2026-01-30 17:07:02.161598059 +0000 UTC m=+4270.798660668" observedRunningTime="2026-01-30 17:07:02.661022762 +0000 UTC m=+4271.298085361" watchObservedRunningTime="2026-01-30 17:07:02.667034232 +0000 UTC m=+4271.304096851"
Jan 30 17:07:06 crc kubenswrapper[4740]: I0130 17:07:06.380107 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:07:06 crc kubenswrapper[4740]: I0130 17:07:06.380532 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sd82x"
Jan 30 17:07:06 crc kubenswrapper[4740]: I0130 17:07:06.447128 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sd82x"
pod="openshift-marketplace/community-operators-sd82x" Jan 30 17:07:16 crc kubenswrapper[4740]: I0130 17:07:16.446663 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sd82x" Jan 30 17:07:16 crc kubenswrapper[4740]: I0130 17:07:16.574678 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sd82x"] Jan 30 17:07:16 crc kubenswrapper[4740]: I0130 17:07:16.781751 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sd82x" podUID="68574236-7419-43cd-a566-cda066f1fc88" containerName="registry-server" containerID="cri-o://c2de5fb3c42a78b1f36e31524bd60d94e36640913d124d011dee797e795c892c" gracePeriod=2 Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.543784 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sd82x" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.640407 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-catalog-content\") pod \"68574236-7419-43cd-a566-cda066f1fc88\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.640752 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-utilities\") pod \"68574236-7419-43cd-a566-cda066f1fc88\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.640884 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmpcs\" (UniqueName: \"kubernetes.io/projected/68574236-7419-43cd-a566-cda066f1fc88-kube-api-access-wmpcs\") pod \"68574236-7419-43cd-a566-cda066f1fc88\" (UID: \"68574236-7419-43cd-a566-cda066f1fc88\") " Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.643706 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-utilities" (OuterVolumeSpecName: "utilities") pod "68574236-7419-43cd-a566-cda066f1fc88" (UID: "68574236-7419-43cd-a566-cda066f1fc88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.648933 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68574236-7419-43cd-a566-cda066f1fc88-kube-api-access-wmpcs" (OuterVolumeSpecName: "kube-api-access-wmpcs") pod "68574236-7419-43cd-a566-cda066f1fc88" (UID: "68574236-7419-43cd-a566-cda066f1fc88"). InnerVolumeSpecName "kube-api-access-wmpcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.719315 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68574236-7419-43cd-a566-cda066f1fc88" (UID: "68574236-7419-43cd-a566-cda066f1fc88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.743923 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.743960 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68574236-7419-43cd-a566-cda066f1fc88-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.743971 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmpcs\" (UniqueName: \"kubernetes.io/projected/68574236-7419-43cd-a566-cda066f1fc88-kube-api-access-wmpcs\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.794723 4740 generic.go:334] "Generic (PLEG): container finished" podID="68574236-7419-43cd-a566-cda066f1fc88" containerID="c2de5fb3c42a78b1f36e31524bd60d94e36640913d124d011dee797e795c892c" exitCode=0 Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.794789 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd82x" event={"ID":"68574236-7419-43cd-a566-cda066f1fc88","Type":"ContainerDied","Data":"c2de5fb3c42a78b1f36e31524bd60d94e36640913d124d011dee797e795c892c"} Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.794809 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sd82x" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.794836 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd82x" event={"ID":"68574236-7419-43cd-a566-cda066f1fc88","Type":"ContainerDied","Data":"625737ea448464f30241c04a4413c110e1851da0a4321ab4ffcdeab5ec7085d0"} Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.794864 4740 scope.go:117] "RemoveContainer" containerID="c2de5fb3c42a78b1f36e31524bd60d94e36640913d124d011dee797e795c892c" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.819576 4740 scope.go:117] "RemoveContainer" containerID="fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.845780 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sd82x"] Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.852368 4740 scope.go:117] "RemoveContainer" containerID="926a2380ab00156c2cbdac303a968aea45a084126248f9a809ad6707e7cde886" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.860779 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sd82x"] Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.898427 4740 scope.go:117] "RemoveContainer" containerID="c2de5fb3c42a78b1f36e31524bd60d94e36640913d124d011dee797e795c892c" Jan 30 17:07:17 crc kubenswrapper[4740]: E0130 17:07:17.899038 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2de5fb3c42a78b1f36e31524bd60d94e36640913d124d011dee797e795c892c\": container with ID starting with c2de5fb3c42a78b1f36e31524bd60d94e36640913d124d011dee797e795c892c not found: ID does not exist" containerID="c2de5fb3c42a78b1f36e31524bd60d94e36640913d124d011dee797e795c892c" Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.899098 
Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.899138 4740 scope.go:117] "RemoveContainer" containerID="fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456"
Jan 30 17:07:17 crc kubenswrapper[4740]: E0130 17:07:17.899565 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456\": container with ID starting with fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456 not found: ID does not exist" containerID="fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456"
Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.899600 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456"} err="failed to get container status \"fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456\": rpc error: code = NotFound desc = could not find container \"fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456\": container with ID starting with fa191b485d463ad97395137fedab44fc0e57d6738f87f2c00d404f505dd4d456 not found: ID does not exist"
Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.899617 4740 scope.go:117] "RemoveContainer" containerID="926a2380ab00156c2cbdac303a968aea45a084126248f9a809ad6707e7cde886"
Jan 30 17:07:17 crc kubenswrapper[4740]: E0130 17:07:17.899922 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926a2380ab00156c2cbdac303a968aea45a084126248f9a809ad6707e7cde886\": container with ID starting with 926a2380ab00156c2cbdac303a968aea45a084126248f9a809ad6707e7cde886 not found: ID does not exist" containerID="926a2380ab00156c2cbdac303a968aea45a084126248f9a809ad6707e7cde886"
Jan 30 17:07:17 crc kubenswrapper[4740]: I0130 17:07:17.899979 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926a2380ab00156c2cbdac303a968aea45a084126248f9a809ad6707e7cde886"} err="failed to get container status \"926a2380ab00156c2cbdac303a968aea45a084126248f9a809ad6707e7cde886\": rpc error: code = NotFound desc = could not find container \"926a2380ab00156c2cbdac303a968aea45a084126248f9a809ad6707e7cde886\": container with ID starting with 926a2380ab00156c2cbdac303a968aea45a084126248f9a809ad6707e7cde886 not found: ID does not exist"
Jan 30 17:07:19 crc kubenswrapper[4740]: I0130 17:07:19.358506 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68574236-7419-43cd-a566-cda066f1fc88" path="/var/lib/kubelet/pods/68574236-7419-43cd-a566-cda066f1fc88/volumes"
Jan 30 17:08:06 crc kubenswrapper[4740]: I0130 17:08:06.822899 4740 generic.go:334] "Generic (PLEG): container finished" podID="b3b057b9-5afa-4aae-80a0-2963a1a54b2a" containerID="9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70" exitCode=0
Jan 30 17:08:06 crc kubenswrapper[4740]: I0130 17:08:06.823028 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk8qw/must-gather-7pfzw" event={"ID":"b3b057b9-5afa-4aae-80a0-2963a1a54b2a","Type":"ContainerDied","Data":"9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70"}
Jan 30 17:08:06 crc kubenswrapper[4740]: I0130 17:08:06.824589 4740 scope.go:117] "RemoveContainer" containerID="9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70"
Jan 30 17:08:06 crc kubenswrapper[4740]: I0130 17:08:06.945921 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xk8qw_must-gather-7pfzw_b3b057b9-5afa-4aae-80a0-2963a1a54b2a/gather/0.log"
Jan 30 17:08:15 crc kubenswrapper[4740]: I0130 17:08:15.144949 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xk8qw/must-gather-7pfzw"]
Jan 30 17:08:15 crc kubenswrapper[4740]: I0130 17:08:15.147190 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xk8qw/must-gather-7pfzw" podUID="b3b057b9-5afa-4aae-80a0-2963a1a54b2a" containerName="copy" containerID="cri-o://57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0" gracePeriod=2
Jan 30 17:08:15 crc kubenswrapper[4740]: I0130 17:08:15.207268 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xk8qw/must-gather-7pfzw"]
Jan 30 17:08:15 crc kubenswrapper[4740]: I0130 17:08:15.911911 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xk8qw_must-gather-7pfzw_b3b057b9-5afa-4aae-80a0-2963a1a54b2a/copy/0.log"
Jan 30 17:08:15 crc kubenswrapper[4740]: I0130 17:08:15.913033 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/must-gather-7pfzw"
Jan 30 17:08:15 crc kubenswrapper[4740]: I0130 17:08:15.935644 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xk8qw_must-gather-7pfzw_b3b057b9-5afa-4aae-80a0-2963a1a54b2a/copy/0.log"
Jan 30 17:08:15 crc kubenswrapper[4740]: I0130 17:08:15.936392 4740 generic.go:334] "Generic (PLEG): container finished" podID="b3b057b9-5afa-4aae-80a0-2963a1a54b2a" containerID="57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0" exitCode=143
Jan 30 17:08:15 crc kubenswrapper[4740]: I0130 17:08:15.936477 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk8qw/must-gather-7pfzw"
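The copy container above finishes with exitCode=143 rather than 0 because it was killed during pod deletion: 143 = 128 + 15 (SIGTERM), the conventional code for a process that terminated on the kubelet's graceful-shutdown signal. A tiny sketch of that decoding (decodeExit is a hypothetical helper):

package main

import (
	"fmt"
	"syscall"
)

// decodeExit interprets a container exit code the way shells do:
// values above 128 mean "terminated by signal (code - 128)".
func decodeExit(code int) string {
	if code > 128 {
		return fmt.Sprintf("killed by signal %d (%v)", code-128, syscall.Signal(code-128))
	}
	return fmt.Sprintf("exited normally with code %d", code)
}

func main() {
	fmt.Println(decodeExit(143)) // killed by signal 15 (terminated)
	fmt.Println(decodeExit(0))   // exited normally with code 0
}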
Need to start a new one" pod="openshift-must-gather-xk8qw/must-gather-7pfzw" Jan 30 17:08:15 crc kubenswrapper[4740]: I0130 17:08:15.936482 4740 scope.go:117] "RemoveContainer" containerID="57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0" Jan 30 17:08:15 crc kubenswrapper[4740]: I0130 17:08:15.959497 4740 scope.go:117] "RemoveContainer" containerID="9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70" Jan 30 17:08:16 crc kubenswrapper[4740]: I0130 17:08:16.000243 4740 scope.go:117] "RemoveContainer" containerID="57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0" Jan 30 17:08:16 crc kubenswrapper[4740]: E0130 17:08:16.001233 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0\": container with ID starting with 57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0 not found: ID does not exist" containerID="57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0" Jan 30 17:08:16 crc kubenswrapper[4740]: I0130 17:08:16.001290 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0"} err="failed to get container status \"57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0\": rpc error: code = NotFound desc = could not find container \"57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0\": container with ID starting with 57d4af1f593d023476a62c99068545b8836f6199bb94e55a8a0ef1b65acd57a0 not found: ID does not exist" Jan 30 17:08:16 crc kubenswrapper[4740]: I0130 17:08:16.001334 4740 scope.go:117] "RemoveContainer" containerID="9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70" Jan 30 17:08:16 crc kubenswrapper[4740]: E0130 17:08:16.001953 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70\": container with ID starting with 9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70 not found: ID does not exist" containerID="9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70" Jan 30 17:08:16 crc kubenswrapper[4740]: I0130 17:08:16.002016 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70"} err="failed to get container status \"9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70\": rpc error: code = NotFound desc = could not find container \"9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70\": container with ID starting with 9686126234bf2783a52b0b400673ceea0d0d60a62050c9bbaad80d7a23a64c70 not found: ID does not exist" Jan 30 17:08:16 crc kubenswrapper[4740]: I0130 17:08:16.022534 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25drt\" (UniqueName: \"kubernetes.io/projected/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-kube-api-access-25drt\") pod \"b3b057b9-5afa-4aae-80a0-2963a1a54b2a\" (UID: \"b3b057b9-5afa-4aae-80a0-2963a1a54b2a\") " Jan 30 17:08:16 crc kubenswrapper[4740]: I0130 17:08:16.022661 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-must-gather-output\") pod 
\"b3b057b9-5afa-4aae-80a0-2963a1a54b2a\" (UID: \"b3b057b9-5afa-4aae-80a0-2963a1a54b2a\") " Jan 30 17:08:16 crc kubenswrapper[4740]: I0130 17:08:16.028041 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-kube-api-access-25drt" (OuterVolumeSpecName: "kube-api-access-25drt") pod "b3b057b9-5afa-4aae-80a0-2963a1a54b2a" (UID: "b3b057b9-5afa-4aae-80a0-2963a1a54b2a"). InnerVolumeSpecName "kube-api-access-25drt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:08:16 crc kubenswrapper[4740]: I0130 17:08:16.125424 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25drt\" (UniqueName: \"kubernetes.io/projected/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-kube-api-access-25drt\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:16 crc kubenswrapper[4740]: I0130 17:08:16.246868 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b3b057b9-5afa-4aae-80a0-2963a1a54b2a" (UID: "b3b057b9-5afa-4aae-80a0-2963a1a54b2a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:08:16 crc kubenswrapper[4740]: I0130 17:08:16.331595 4740 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b3b057b9-5afa-4aae-80a0-2963a1a54b2a-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:17 crc kubenswrapper[4740]: I0130 17:08:17.372154 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b057b9-5afa-4aae-80a0-2963a1a54b2a" path="/var/lib/kubelet/pods/b3b057b9-5afa-4aae-80a0-2963a1a54b2a/volumes" Jan 30 17:08:24 crc kubenswrapper[4740]: I0130 17:08:24.455197 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:08:24 crc kubenswrapper[4740]: I0130 17:08:24.455781 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:08:54 crc kubenswrapper[4740]: I0130 17:08:54.455481 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:08:54 crc kubenswrapper[4740]: I0130 17:08:54.456193 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:09:24 crc kubenswrapper[4740]: I0130 17:09:24.455250 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:09:24 crc kubenswrapper[4740]: I0130 17:09:24.457016 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:09:24 crc kubenswrapper[4740]: I0130 17:09:24.457186 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 17:09:24 crc kubenswrapper[4740]: I0130 17:09:24.458384 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:09:24 crc kubenswrapper[4740]: I0130 17:09:24.458565 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" gracePeriod=600 Jan 30 17:09:24 crc kubenswrapper[4740]: E0130 17:09:24.585001 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:09:24 crc kubenswrapper[4740]: I0130 17:09:24.663774 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" exitCode=0 Jan 30 17:09:24 crc kubenswrapper[4740]: I0130 17:09:24.663877 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f"} Jan 30 17:09:24 crc kubenswrapper[4740]: I0130 17:09:24.664182 4740 scope.go:117] "RemoveContainer" containerID="e2b85861d1b3ff70c3fc37cc798d3a7a87fc63bbc5ab5e84863866a94fe12e95" Jan 30 17:09:24 crc kubenswrapper[4740]: I0130 17:09:24.665404 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:09:24 crc kubenswrapper[4740]: E0130 17:09:24.665855 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:09:36 crc kubenswrapper[4740]: I0130 17:09:36.336000 4740 
scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:09:36 crc kubenswrapper[4740]: E0130 17:09:36.336975 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:09:51 crc kubenswrapper[4740]: I0130 17:09:51.336424 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:09:51 crc kubenswrapper[4740]: E0130 17:09:51.337693 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:10:06 crc kubenswrapper[4740]: I0130 17:10:06.336462 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:10:06 crc kubenswrapper[4740]: E0130 17:10:06.338792 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:10:17 crc kubenswrapper[4740]: I0130 17:10:17.338668 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:10:17 crc kubenswrapper[4740]: E0130 17:10:17.339500 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:10:31 crc kubenswrapper[4740]: I0130 17:10:31.335925 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:10:31 crc kubenswrapper[4740]: E0130 17:10:31.336851 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:10:46 crc kubenswrapper[4740]: I0130 17:10:46.336045 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:10:46 crc kubenswrapper[4740]: E0130 17:10:46.336864 4740 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:10:59 crc kubenswrapper[4740]: I0130 17:10:59.336246 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:10:59 crc kubenswrapper[4740]: E0130 17:10:59.337180 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.647201 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vcsws/must-gather-sffq6"] Jan 30 17:11:11 crc kubenswrapper[4740]: E0130 17:11:11.648476 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68574236-7419-43cd-a566-cda066f1fc88" containerName="registry-server" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.648496 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="68574236-7419-43cd-a566-cda066f1fc88" containerName="registry-server" Jan 30 17:11:11 crc kubenswrapper[4740]: E0130 17:11:11.648519 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b057b9-5afa-4aae-80a0-2963a1a54b2a" containerName="gather" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.648529 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b057b9-5afa-4aae-80a0-2963a1a54b2a" containerName="gather" Jan 30 17:11:11 crc kubenswrapper[4740]: E0130 17:11:11.648548 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68574236-7419-43cd-a566-cda066f1fc88" containerName="extract-utilities" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.648557 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="68574236-7419-43cd-a566-cda066f1fc88" containerName="extract-utilities" Jan 30 17:11:11 crc kubenswrapper[4740]: E0130 17:11:11.648575 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b057b9-5afa-4aae-80a0-2963a1a54b2a" containerName="copy" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.648582 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b057b9-5afa-4aae-80a0-2963a1a54b2a" containerName="copy" Jan 30 17:11:11 crc kubenswrapper[4740]: E0130 17:11:11.648597 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68574236-7419-43cd-a566-cda066f1fc88" containerName="extract-content" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.648605 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="68574236-7419-43cd-a566-cda066f1fc88" containerName="extract-content" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.648913 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b057b9-5afa-4aae-80a0-2963a1a54b2a" containerName="copy" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.648949 4740 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b3b057b9-5afa-4aae-80a0-2963a1a54b2a" containerName="gather" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.648964 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="68574236-7419-43cd-a566-cda066f1fc88" containerName="registry-server" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.650657 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vcsws/must-gather-sffq6" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.667989 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vcsws"/"openshift-service-ca.crt" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.669797 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vcsws"/"default-dockercfg-ghxxg" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.678486 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vcsws/must-gather-sffq6"] Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.684664 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vcsws"/"kube-root-ca.crt" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.779746 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/50b8cfe1-a001-4ebe-8756-6963c0c9c145-must-gather-output\") pod \"must-gather-sffq6\" (UID: \"50b8cfe1-a001-4ebe-8756-6963c0c9c145\") " pod="openshift-must-gather-vcsws/must-gather-sffq6" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.779848 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtkdc\" (UniqueName: \"kubernetes.io/projected/50b8cfe1-a001-4ebe-8756-6963c0c9c145-kube-api-access-vtkdc\") pod \"must-gather-sffq6\" (UID: \"50b8cfe1-a001-4ebe-8756-6963c0c9c145\") " pod="openshift-must-gather-vcsws/must-gather-sffq6" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.883028 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/50b8cfe1-a001-4ebe-8756-6963c0c9c145-must-gather-output\") pod \"must-gather-sffq6\" (UID: \"50b8cfe1-a001-4ebe-8756-6963c0c9c145\") " pod="openshift-must-gather-vcsws/must-gather-sffq6" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.883129 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtkdc\" (UniqueName: \"kubernetes.io/projected/50b8cfe1-a001-4ebe-8756-6963c0c9c145-kube-api-access-vtkdc\") pod \"must-gather-sffq6\" (UID: \"50b8cfe1-a001-4ebe-8756-6963c0c9c145\") " pod="openshift-must-gather-vcsws/must-gather-sffq6" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.884225 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/50b8cfe1-a001-4ebe-8756-6963c0c9c145-must-gather-output\") pod \"must-gather-sffq6\" (UID: \"50b8cfe1-a001-4ebe-8756-6963c0c9c145\") " pod="openshift-must-gather-vcsws/must-gather-sffq6" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.915554 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtkdc\" (UniqueName: \"kubernetes.io/projected/50b8cfe1-a001-4ebe-8756-6963c0c9c145-kube-api-access-vtkdc\") pod \"must-gather-sffq6\" (UID: \"50b8cfe1-a001-4ebe-8756-6963c0c9c145\") " 
pod="openshift-must-gather-vcsws/must-gather-sffq6" Jan 30 17:11:11 crc kubenswrapper[4740]: I0130 17:11:11.981621 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vcsws/must-gather-sffq6" Jan 30 17:11:12 crc kubenswrapper[4740]: I0130 17:11:12.715959 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vcsws/must-gather-sffq6"] Jan 30 17:11:13 crc kubenswrapper[4740]: I0130 17:11:13.173929 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/must-gather-sffq6" event={"ID":"50b8cfe1-a001-4ebe-8756-6963c0c9c145","Type":"ContainerStarted","Data":"9394cbff9164e4bd600bb5d1c4308d3d4b2367c2b3dd82d6baee55c532164c99"} Jan 30 17:11:13 crc kubenswrapper[4740]: I0130 17:11:13.174607 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/must-gather-sffq6" event={"ID":"50b8cfe1-a001-4ebe-8756-6963c0c9c145","Type":"ContainerStarted","Data":"354012b7232b02256b433ed902adaa7c91152778eb2d326ecdb44d2c31b75c98"} Jan 30 17:11:14 crc kubenswrapper[4740]: I0130 17:11:14.187278 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/must-gather-sffq6" event={"ID":"50b8cfe1-a001-4ebe-8756-6963c0c9c145","Type":"ContainerStarted","Data":"c4a8dafe757f19aa3da49a4d4cc7df531224e2d03e8bf90b0a6c381886423bb2"} Jan 30 17:11:14 crc kubenswrapper[4740]: I0130 17:11:14.212028 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vcsws/must-gather-sffq6" podStartSLOduration=3.211998646 podStartE2EDuration="3.211998646s" podCreationTimestamp="2026-01-30 17:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:11:14.205313028 +0000 UTC m=+4522.842375627" watchObservedRunningTime="2026-01-30 17:11:14.211998646 +0000 UTC m=+4522.849061245" Jan 30 17:11:14 crc kubenswrapper[4740]: I0130 17:11:14.336459 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:11:14 crc kubenswrapper[4740]: E0130 17:11:14.336882 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:11:19 crc kubenswrapper[4740]: I0130 17:11:19.442144 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vcsws/crc-debug-2pbgb"] Jan 30 17:11:19 crc kubenswrapper[4740]: I0130 17:11:19.445935 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-2pbgb" Jan 30 17:11:19 crc kubenswrapper[4740]: I0130 17:11:19.532859 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt57j\" (UniqueName: \"kubernetes.io/projected/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-kube-api-access-dt57j\") pod \"crc-debug-2pbgb\" (UID: \"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de\") " pod="openshift-must-gather-vcsws/crc-debug-2pbgb" Jan 30 17:11:19 crc kubenswrapper[4740]: I0130 17:11:19.533024 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-host\") pod \"crc-debug-2pbgb\" (UID: \"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de\") " pod="openshift-must-gather-vcsws/crc-debug-2pbgb" Jan 30 17:11:19 crc kubenswrapper[4740]: I0130 17:11:19.635339 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-host\") pod \"crc-debug-2pbgb\" (UID: \"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de\") " pod="openshift-must-gather-vcsws/crc-debug-2pbgb" Jan 30 17:11:19 crc kubenswrapper[4740]: I0130 17:11:19.635585 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-host\") pod \"crc-debug-2pbgb\" (UID: \"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de\") " pod="openshift-must-gather-vcsws/crc-debug-2pbgb" Jan 30 17:11:19 crc kubenswrapper[4740]: I0130 17:11:19.636021 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt57j\" (UniqueName: \"kubernetes.io/projected/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-kube-api-access-dt57j\") pod \"crc-debug-2pbgb\" (UID: \"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de\") " pod="openshift-must-gather-vcsws/crc-debug-2pbgb" Jan 30 17:11:19 crc kubenswrapper[4740]: I0130 17:11:19.663910 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt57j\" (UniqueName: \"kubernetes.io/projected/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-kube-api-access-dt57j\") pod \"crc-debug-2pbgb\" (UID: \"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de\") " pod="openshift-must-gather-vcsws/crc-debug-2pbgb" Jan 30 17:11:19 crc kubenswrapper[4740]: I0130 17:11:19.785648 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-2pbgb" Jan 30 17:11:20 crc kubenswrapper[4740]: I0130 17:11:20.262638 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/crc-debug-2pbgb" event={"ID":"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de","Type":"ContainerStarted","Data":"89159ddfa27a6f2ae6fe7a7202130f61e55906d0a19d4707f905ef3d14e668b3"} Jan 30 17:11:20 crc kubenswrapper[4740]: I0130 17:11:20.262967 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/crc-debug-2pbgb" event={"ID":"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de","Type":"ContainerStarted","Data":"6c5bea16bf22e5a8858c7de7741e0aa567da9f1c48225b6160d36dab902f4642"} Jan 30 17:11:20 crc kubenswrapper[4740]: I0130 17:11:20.295609 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vcsws/crc-debug-2pbgb" podStartSLOduration=1.29556736 podStartE2EDuration="1.29556736s" podCreationTimestamp="2026-01-30 17:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:11:20.292759019 +0000 UTC m=+4528.929821618" watchObservedRunningTime="2026-01-30 17:11:20.29556736 +0000 UTC m=+4528.932629959" Jan 30 17:11:29 crc kubenswrapper[4740]: I0130 17:11:29.340433 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:11:29 crc kubenswrapper[4740]: E0130 17:11:29.342225 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:11:44 crc kubenswrapper[4740]: I0130 17:11:44.336114 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:11:44 crc kubenswrapper[4740]: E0130 17:11:44.337304 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:11:55 crc kubenswrapper[4740]: I0130 17:11:55.336548 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:11:55 crc kubenswrapper[4740]: E0130 17:11:55.337478 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:12:06 crc kubenswrapper[4740]: I0130 17:12:06.335899 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:12:06 crc kubenswrapper[4740]: E0130 17:12:06.336855 4740 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:12:19 crc kubenswrapper[4740]: I0130 17:12:19.342809 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:12:19 crc kubenswrapper[4740]: E0130 17:12:19.343845 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:12:27 crc kubenswrapper[4740]: I0130 17:12:27.130205 4740 generic.go:334] "Generic (PLEG): container finished" podID="0b14f6c1-e5f4-4d59-b3ff-aa97655b76de" containerID="89159ddfa27a6f2ae6fe7a7202130f61e55906d0a19d4707f905ef3d14e668b3" exitCode=0 Jan 30 17:12:27 crc kubenswrapper[4740]: I0130 17:12:27.130842 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/crc-debug-2pbgb" event={"ID":"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de","Type":"ContainerDied","Data":"89159ddfa27a6f2ae6fe7a7202130f61e55906d0a19d4707f905ef3d14e668b3"} Jan 30 17:12:28 crc kubenswrapper[4740]: I0130 17:12:28.297072 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-2pbgb" Jan 30 17:12:28 crc kubenswrapper[4740]: I0130 17:12:28.334148 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vcsws/crc-debug-2pbgb"] Jan 30 17:12:28 crc kubenswrapper[4740]: I0130 17:12:28.343149 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vcsws/crc-debug-2pbgb"] Jan 30 17:12:28 crc kubenswrapper[4740]: I0130 17:12:28.472698 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt57j\" (UniqueName: \"kubernetes.io/projected/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-kube-api-access-dt57j\") pod \"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de\" (UID: \"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de\") " Jan 30 17:12:28 crc kubenswrapper[4740]: I0130 17:12:28.473151 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-host\") pod \"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de\" (UID: \"0b14f6c1-e5f4-4d59-b3ff-aa97655b76de\") " Jan 30 17:12:28 crc kubenswrapper[4740]: I0130 17:12:28.473259 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-host" (OuterVolumeSpecName: "host") pod "0b14f6c1-e5f4-4d59-b3ff-aa97655b76de" (UID: "0b14f6c1-e5f4-4d59-b3ff-aa97655b76de"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:12:28 crc kubenswrapper[4740]: I0130 17:12:28.473926 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-host\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:28 crc kubenswrapper[4740]: I0130 17:12:28.483908 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-kube-api-access-dt57j" (OuterVolumeSpecName: "kube-api-access-dt57j") pod "0b14f6c1-e5f4-4d59-b3ff-aa97655b76de" (UID: "0b14f6c1-e5f4-4d59-b3ff-aa97655b76de"). InnerVolumeSpecName "kube-api-access-dt57j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:12:28 crc kubenswrapper[4740]: I0130 17:12:28.575984 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt57j\" (UniqueName: \"kubernetes.io/projected/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de-kube-api-access-dt57j\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.153651 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c5bea16bf22e5a8858c7de7741e0aa567da9f1c48225b6160d36dab902f4642" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.153755 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-2pbgb" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.351683 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b14f6c1-e5f4-4d59-b3ff-aa97655b76de" path="/var/lib/kubelet/pods/0b14f6c1-e5f4-4d59-b3ff-aa97655b76de/volumes" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.605583 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vcsws/crc-debug-9sst4"] Jan 30 17:12:29 crc kubenswrapper[4740]: E0130 17:12:29.606197 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b14f6c1-e5f4-4d59-b3ff-aa97655b76de" containerName="container-00" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.606223 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b14f6c1-e5f4-4d59-b3ff-aa97655b76de" containerName="container-00" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.606954 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b14f6c1-e5f4-4d59-b3ff-aa97655b76de" containerName="container-00" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.608022 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-9sst4" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.708905 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6q5\" (UniqueName: \"kubernetes.io/projected/5e43288d-867d-4181-85d0-6b26dc661588-kube-api-access-pm6q5\") pod \"crc-debug-9sst4\" (UID: \"5e43288d-867d-4181-85d0-6b26dc661588\") " pod="openshift-must-gather-vcsws/crc-debug-9sst4" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.709961 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e43288d-867d-4181-85d0-6b26dc661588-host\") pod \"crc-debug-9sst4\" (UID: \"5e43288d-867d-4181-85d0-6b26dc661588\") " pod="openshift-must-gather-vcsws/crc-debug-9sst4" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.812777 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e43288d-867d-4181-85d0-6b26dc661588-host\") pod \"crc-debug-9sst4\" (UID: \"5e43288d-867d-4181-85d0-6b26dc661588\") " pod="openshift-must-gather-vcsws/crc-debug-9sst4" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.812975 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e43288d-867d-4181-85d0-6b26dc661588-host\") pod \"crc-debug-9sst4\" (UID: \"5e43288d-867d-4181-85d0-6b26dc661588\") " pod="openshift-must-gather-vcsws/crc-debug-9sst4" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.813033 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm6q5\" (UniqueName: \"kubernetes.io/projected/5e43288d-867d-4181-85d0-6b26dc661588-kube-api-access-pm6q5\") pod \"crc-debug-9sst4\" (UID: \"5e43288d-867d-4181-85d0-6b26dc661588\") " pod="openshift-must-gather-vcsws/crc-debug-9sst4" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.846404 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm6q5\" (UniqueName: \"kubernetes.io/projected/5e43288d-867d-4181-85d0-6b26dc661588-kube-api-access-pm6q5\") pod \"crc-debug-9sst4\" (UID: \"5e43288d-867d-4181-85d0-6b26dc661588\") " pod="openshift-must-gather-vcsws/crc-debug-9sst4" Jan 30 17:12:29 crc kubenswrapper[4740]: I0130 17:12:29.930023 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-9sst4" Jan 30 17:12:30 crc kubenswrapper[4740]: W0130 17:12:30.028942 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e43288d_867d_4181_85d0_6b26dc661588.slice/crio-2b1de699d760884b56c87843a8462118107bf1321beefe290eb9d49ea8ecbff6 WatchSource:0}: Error finding container 2b1de699d760884b56c87843a8462118107bf1321beefe290eb9d49ea8ecbff6: Status 404 returned error can't find the container with id 2b1de699d760884b56c87843a8462118107bf1321beefe290eb9d49ea8ecbff6 Jan 30 17:12:30 crc kubenswrapper[4740]: I0130 17:12:30.183884 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/crc-debug-9sst4" event={"ID":"5e43288d-867d-4181-85d0-6b26dc661588","Type":"ContainerStarted","Data":"2b1de699d760884b56c87843a8462118107bf1321beefe290eb9d49ea8ecbff6"} Jan 30 17:12:31 crc kubenswrapper[4740]: I0130 17:12:31.198453 4740 generic.go:334] "Generic (PLEG): container finished" podID="5e43288d-867d-4181-85d0-6b26dc661588" containerID="584360586e65f77c6b9a292de9b8b1868e27dcf4586c913c8061539610a08d70" exitCode=0 Jan 30 17:12:31 crc kubenswrapper[4740]: I0130 17:12:31.198599 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/crc-debug-9sst4" event={"ID":"5e43288d-867d-4181-85d0-6b26dc661588","Type":"ContainerDied","Data":"584360586e65f77c6b9a292de9b8b1868e27dcf4586c913c8061539610a08d70"} Jan 30 17:12:32 crc kubenswrapper[4740]: I0130 17:12:32.336574 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:12:32 crc kubenswrapper[4740]: E0130 17:12:32.337653 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:12:32 crc kubenswrapper[4740]: I0130 17:12:32.979563 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-9sst4" Jan 30 17:12:33 crc kubenswrapper[4740]: I0130 17:12:33.102621 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm6q5\" (UniqueName: \"kubernetes.io/projected/5e43288d-867d-4181-85d0-6b26dc661588-kube-api-access-pm6q5\") pod \"5e43288d-867d-4181-85d0-6b26dc661588\" (UID: \"5e43288d-867d-4181-85d0-6b26dc661588\") " Jan 30 17:12:33 crc kubenswrapper[4740]: I0130 17:12:33.102891 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e43288d-867d-4181-85d0-6b26dc661588-host\") pod \"5e43288d-867d-4181-85d0-6b26dc661588\" (UID: \"5e43288d-867d-4181-85d0-6b26dc661588\") " Jan 30 17:12:33 crc kubenswrapper[4740]: I0130 17:12:33.103248 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e43288d-867d-4181-85d0-6b26dc661588-host" (OuterVolumeSpecName: "host") pod "5e43288d-867d-4181-85d0-6b26dc661588" (UID: "5e43288d-867d-4181-85d0-6b26dc661588"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:12:33 crc kubenswrapper[4740]: I0130 17:12:33.111096 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e43288d-867d-4181-85d0-6b26dc661588-kube-api-access-pm6q5" (OuterVolumeSpecName: "kube-api-access-pm6q5") pod "5e43288d-867d-4181-85d0-6b26dc661588" (UID: "5e43288d-867d-4181-85d0-6b26dc661588"). InnerVolumeSpecName "kube-api-access-pm6q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:12:33 crc kubenswrapper[4740]: I0130 17:12:33.205311 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm6q5\" (UniqueName: \"kubernetes.io/projected/5e43288d-867d-4181-85d0-6b26dc661588-kube-api-access-pm6q5\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:33 crc kubenswrapper[4740]: I0130 17:12:33.205378 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e43288d-867d-4181-85d0-6b26dc661588-host\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:33 crc kubenswrapper[4740]: I0130 17:12:33.232446 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/crc-debug-9sst4" event={"ID":"5e43288d-867d-4181-85d0-6b26dc661588","Type":"ContainerDied","Data":"2b1de699d760884b56c87843a8462118107bf1321beefe290eb9d49ea8ecbff6"} Jan 30 17:12:33 crc kubenswrapper[4740]: I0130 17:12:33.232502 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-9sst4" Jan 30 17:12:33 crc kubenswrapper[4740]: I0130 17:12:33.232511 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b1de699d760884b56c87843a8462118107bf1321beefe290eb9d49ea8ecbff6" Jan 30 17:12:33 crc kubenswrapper[4740]: I0130 17:12:33.683811 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vcsws/crc-debug-9sst4"] Jan 30 17:12:33 crc kubenswrapper[4740]: I0130 17:12:33.695979 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vcsws/crc-debug-9sst4"] Jan 30 17:12:34 crc kubenswrapper[4740]: I0130 17:12:34.893976 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vcsws/crc-debug-swfh9"] Jan 30 17:12:34 crc kubenswrapper[4740]: E0130 17:12:34.895455 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e43288d-867d-4181-85d0-6b26dc661588" containerName="container-00" Jan 30 17:12:34 crc kubenswrapper[4740]: I0130 17:12:34.895485 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e43288d-867d-4181-85d0-6b26dc661588" containerName="container-00" Jan 30 17:12:34 crc kubenswrapper[4740]: I0130 17:12:34.895774 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e43288d-867d-4181-85d0-6b26dc661588" containerName="container-00" Jan 30 17:12:34 crc kubenswrapper[4740]: I0130 17:12:34.896758 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-swfh9" Jan 30 17:12:35 crc kubenswrapper[4740]: I0130 17:12:35.048033 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/283c6b72-ae45-4825-b7fd-f3544e93efed-host\") pod \"crc-debug-swfh9\" (UID: \"283c6b72-ae45-4825-b7fd-f3544e93efed\") " pod="openshift-must-gather-vcsws/crc-debug-swfh9" Jan 30 17:12:35 crc kubenswrapper[4740]: I0130 17:12:35.048574 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr8cq\" (UniqueName: \"kubernetes.io/projected/283c6b72-ae45-4825-b7fd-f3544e93efed-kube-api-access-xr8cq\") pod \"crc-debug-swfh9\" (UID: \"283c6b72-ae45-4825-b7fd-f3544e93efed\") " pod="openshift-must-gather-vcsws/crc-debug-swfh9" Jan 30 17:12:35 crc kubenswrapper[4740]: I0130 17:12:35.150303 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8cq\" (UniqueName: \"kubernetes.io/projected/283c6b72-ae45-4825-b7fd-f3544e93efed-kube-api-access-xr8cq\") pod \"crc-debug-swfh9\" (UID: \"283c6b72-ae45-4825-b7fd-f3544e93efed\") " pod="openshift-must-gather-vcsws/crc-debug-swfh9" Jan 30 17:12:35 crc kubenswrapper[4740]: I0130 17:12:35.150490 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/283c6b72-ae45-4825-b7fd-f3544e93efed-host\") pod \"crc-debug-swfh9\" (UID: \"283c6b72-ae45-4825-b7fd-f3544e93efed\") " pod="openshift-must-gather-vcsws/crc-debug-swfh9" Jan 30 17:12:35 crc kubenswrapper[4740]: I0130 17:12:35.150714 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/283c6b72-ae45-4825-b7fd-f3544e93efed-host\") pod \"crc-debug-swfh9\" (UID: \"283c6b72-ae45-4825-b7fd-f3544e93efed\") " pod="openshift-must-gather-vcsws/crc-debug-swfh9" Jan 30 17:12:35 crc kubenswrapper[4740]: I0130 17:12:35.188525 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8cq\" (UniqueName: \"kubernetes.io/projected/283c6b72-ae45-4825-b7fd-f3544e93efed-kube-api-access-xr8cq\") pod \"crc-debug-swfh9\" (UID: \"283c6b72-ae45-4825-b7fd-f3544e93efed\") " pod="openshift-must-gather-vcsws/crc-debug-swfh9" Jan 30 17:12:35 crc kubenswrapper[4740]: I0130 17:12:35.233104 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-swfh9" Jan 30 17:12:35 crc kubenswrapper[4740]: I0130 17:12:35.354037 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e43288d-867d-4181-85d0-6b26dc661588" path="/var/lib/kubelet/pods/5e43288d-867d-4181-85d0-6b26dc661588/volumes" Jan 30 17:12:36 crc kubenswrapper[4740]: I0130 17:12:36.271697 4740 generic.go:334] "Generic (PLEG): container finished" podID="283c6b72-ae45-4825-b7fd-f3544e93efed" containerID="55aab3c494a191bf2706b1ef9867a645e1a9f698b3e8d4fe87b7fe3556939b97" exitCode=0 Jan 30 17:12:36 crc kubenswrapper[4740]: I0130 17:12:36.271780 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/crc-debug-swfh9" event={"ID":"283c6b72-ae45-4825-b7fd-f3544e93efed","Type":"ContainerDied","Data":"55aab3c494a191bf2706b1ef9867a645e1a9f698b3e8d4fe87b7fe3556939b97"} Jan 30 17:12:36 crc kubenswrapper[4740]: I0130 17:12:36.272086 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/crc-debug-swfh9" event={"ID":"283c6b72-ae45-4825-b7fd-f3544e93efed","Type":"ContainerStarted","Data":"00d65eb885b5d214306062671b0bca7879cee9ac99acbf1404abf10f3d79a164"} Jan 30 17:12:36 crc kubenswrapper[4740]: I0130 17:12:36.323558 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vcsws/crc-debug-swfh9"] Jan 30 17:12:36 crc kubenswrapper[4740]: I0130 17:12:36.337045 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vcsws/crc-debug-swfh9"] Jan 30 17:12:37 crc kubenswrapper[4740]: I0130 17:12:37.434795 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-swfh9" Jan 30 17:12:37 crc kubenswrapper[4740]: I0130 17:12:37.504508 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr8cq\" (UniqueName: \"kubernetes.io/projected/283c6b72-ae45-4825-b7fd-f3544e93efed-kube-api-access-xr8cq\") pod \"283c6b72-ae45-4825-b7fd-f3544e93efed\" (UID: \"283c6b72-ae45-4825-b7fd-f3544e93efed\") " Jan 30 17:12:37 crc kubenswrapper[4740]: I0130 17:12:37.504712 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/283c6b72-ae45-4825-b7fd-f3544e93efed-host\") pod \"283c6b72-ae45-4825-b7fd-f3544e93efed\" (UID: \"283c6b72-ae45-4825-b7fd-f3544e93efed\") " Jan 30 17:12:37 crc kubenswrapper[4740]: I0130 17:12:37.504847 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/283c6b72-ae45-4825-b7fd-f3544e93efed-host" (OuterVolumeSpecName: "host") pod "283c6b72-ae45-4825-b7fd-f3544e93efed" (UID: "283c6b72-ae45-4825-b7fd-f3544e93efed"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:12:37 crc kubenswrapper[4740]: I0130 17:12:37.505399 4740 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/283c6b72-ae45-4825-b7fd-f3544e93efed-host\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:37 crc kubenswrapper[4740]: I0130 17:12:37.514367 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283c6b72-ae45-4825-b7fd-f3544e93efed-kube-api-access-xr8cq" (OuterVolumeSpecName: "kube-api-access-xr8cq") pod "283c6b72-ae45-4825-b7fd-f3544e93efed" (UID: "283c6b72-ae45-4825-b7fd-f3544e93efed"). InnerVolumeSpecName "kube-api-access-xr8cq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:12:37 crc kubenswrapper[4740]: I0130 17:12:37.607385 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr8cq\" (UniqueName: \"kubernetes.io/projected/283c6b72-ae45-4825-b7fd-f3544e93efed-kube-api-access-xr8cq\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:38 crc kubenswrapper[4740]: I0130 17:12:38.305274 4740 scope.go:117] "RemoveContainer" containerID="55aab3c494a191bf2706b1ef9867a645e1a9f698b3e8d4fe87b7fe3556939b97" Jan 30 17:12:38 crc kubenswrapper[4740]: I0130 17:12:38.305924 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vcsws/crc-debug-swfh9" Jan 30 17:12:39 crc kubenswrapper[4740]: I0130 17:12:39.348525 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="283c6b72-ae45-4825-b7fd-f3544e93efed" path="/var/lib/kubelet/pods/283c6b72-ae45-4825-b7fd-f3544e93efed/volumes" Jan 30 17:12:44 crc kubenswrapper[4740]: I0130 17:12:44.336341 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:12:44 crc kubenswrapper[4740]: E0130 17:12:44.338987 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.677233 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6rq59"] Jan 30 17:12:51 crc kubenswrapper[4740]: E0130 17:12:51.678486 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283c6b72-ae45-4825-b7fd-f3544e93efed" containerName="container-00" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.678508 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="283c6b72-ae45-4825-b7fd-f3544e93efed" containerName="container-00" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.678817 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="283c6b72-ae45-4825-b7fd-f3544e93efed" containerName="container-00" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.681052 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.709375 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rq59"] Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.778358 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-utilities\") pod \"redhat-marketplace-6rq59\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.778531 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glz8\" (UniqueName: \"kubernetes.io/projected/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-kube-api-access-9glz8\") pod \"redhat-marketplace-6rq59\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.778708 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-catalog-content\") pod \"redhat-marketplace-6rq59\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.881417 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-utilities\") pod \"redhat-marketplace-6rq59\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.881486 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9glz8\" (UniqueName: \"kubernetes.io/projected/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-kube-api-access-9glz8\") pod \"redhat-marketplace-6rq59\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.881626 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-catalog-content\") pod \"redhat-marketplace-6rq59\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.882015 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-utilities\") pod \"redhat-marketplace-6rq59\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.882080 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-catalog-content\") pod \"redhat-marketplace-6rq59\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:12:51 crc kubenswrapper[4740]: I0130 17:12:51.917504 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9glz8\" (UniqueName: \"kubernetes.io/projected/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-kube-api-access-9glz8\") pod \"redhat-marketplace-6rq59\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:12:52 crc kubenswrapper[4740]: I0130 17:12:52.006756 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:12:52 crc kubenswrapper[4740]: I0130 17:12:52.570805 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rq59"] Jan 30 17:12:53 crc kubenswrapper[4740]: I0130 17:12:53.461790 4740 generic.go:334] "Generic (PLEG): container finished" podID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" containerID="1550d35df40640e3b379b60f57059f048529ce2e5bc48068aa5d0f8ddbb21c0a" exitCode=0 Jan 30 17:12:53 crc kubenswrapper[4740]: I0130 17:12:53.461896 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rq59" event={"ID":"bc23cf59-eb2e-4392-b7e9-bef4494db0d3","Type":"ContainerDied","Data":"1550d35df40640e3b379b60f57059f048529ce2e5bc48068aa5d0f8ddbb21c0a"} Jan 30 17:12:53 crc kubenswrapper[4740]: I0130 17:12:53.462139 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rq59" event={"ID":"bc23cf59-eb2e-4392-b7e9-bef4494db0d3","Type":"ContainerStarted","Data":"11d520b53928086e27b8524874d36c6e58dfab2128b01485c15d7cac9a6ac375"} Jan 30 17:12:53 crc kubenswrapper[4740]: I0130 17:12:53.464670 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:12:55 crc kubenswrapper[4740]: I0130 17:12:55.500614 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rq59" event={"ID":"bc23cf59-eb2e-4392-b7e9-bef4494db0d3","Type":"ContainerStarted","Data":"148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4"} Jan 30 17:12:57 crc kubenswrapper[4740]: I0130 17:12:57.335724 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:12:57 crc kubenswrapper[4740]: E0130 17:12:57.336556 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:12:57 crc kubenswrapper[4740]: I0130 17:12:57.530015 4740 generic.go:334] "Generic (PLEG): container finished" podID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" containerID="148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4" exitCode=0 Jan 30 17:12:57 crc kubenswrapper[4740]: I0130 17:12:57.530081 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rq59" event={"ID":"bc23cf59-eb2e-4392-b7e9-bef4494db0d3","Type":"ContainerDied","Data":"148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4"} Jan 30 17:12:58 crc kubenswrapper[4740]: I0130 17:12:58.543993 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rq59" 
event={"ID":"bc23cf59-eb2e-4392-b7e9-bef4494db0d3","Type":"ContainerStarted","Data":"8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55"} Jan 30 17:12:58 crc kubenswrapper[4740]: I0130 17:12:58.586191 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6rq59" podStartSLOduration=3.063004251 podStartE2EDuration="7.586165451s" podCreationTimestamp="2026-01-30 17:12:51 +0000 UTC" firstStartedPulling="2026-01-30 17:12:53.464361937 +0000 UTC m=+4622.101424536" lastFinishedPulling="2026-01-30 17:12:57.987523147 +0000 UTC m=+4626.624585736" observedRunningTime="2026-01-30 17:12:58.567468091 +0000 UTC m=+4627.204530700" watchObservedRunningTime="2026-01-30 17:12:58.586165451 +0000 UTC m=+4627.223228050" Jan 30 17:13:02 crc kubenswrapper[4740]: I0130 17:13:02.007757 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:13:02 crc kubenswrapper[4740]: I0130 17:13:02.008481 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:13:02 crc kubenswrapper[4740]: I0130 17:13:02.073843 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:13:12 crc kubenswrapper[4740]: I0130 17:13:12.068229 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:13:12 crc kubenswrapper[4740]: I0130 17:13:12.139702 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rq59"] Jan 30 17:13:12 crc kubenswrapper[4740]: I0130 17:13:12.336307 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:13:12 crc kubenswrapper[4740]: E0130 17:13:12.336656 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:13:12 crc kubenswrapper[4740]: I0130 17:13:12.703424 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6rq59" podUID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" containerName="registry-server" containerID="cri-o://8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55" gracePeriod=2 Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.598369 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.720531 4740 generic.go:334] "Generic (PLEG): container finished" podID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" containerID="8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55" exitCode=0 Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.720606 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rq59" event={"ID":"bc23cf59-eb2e-4392-b7e9-bef4494db0d3","Type":"ContainerDied","Data":"8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55"} Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.720646 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rq59" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.720672 4740 scope.go:117] "RemoveContainer" containerID="8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.720654 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rq59" event={"ID":"bc23cf59-eb2e-4392-b7e9-bef4494db0d3","Type":"ContainerDied","Data":"11d520b53928086e27b8524874d36c6e58dfab2128b01485c15d7cac9a6ac375"} Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.731814 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9glz8\" (UniqueName: \"kubernetes.io/projected/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-kube-api-access-9glz8\") pod \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.732239 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-catalog-content\") pod \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.732468 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-utilities\") pod \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\" (UID: \"bc23cf59-eb2e-4392-b7e9-bef4494db0d3\") " Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.733249 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-utilities" (OuterVolumeSpecName: "utilities") pod "bc23cf59-eb2e-4392-b7e9-bef4494db0d3" (UID: "bc23cf59-eb2e-4392-b7e9-bef4494db0d3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.733926 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.762182 4740 scope.go:117] "RemoveContainer" containerID="148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.763761 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-kube-api-access-9glz8" (OuterVolumeSpecName: "kube-api-access-9glz8") pod "bc23cf59-eb2e-4392-b7e9-bef4494db0d3" (UID: "bc23cf59-eb2e-4392-b7e9-bef4494db0d3"). InnerVolumeSpecName "kube-api-access-9glz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.779555 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc23cf59-eb2e-4392-b7e9-bef4494db0d3" (UID: "bc23cf59-eb2e-4392-b7e9-bef4494db0d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.826502 4740 scope.go:117] "RemoveContainer" containerID="1550d35df40640e3b379b60f57059f048529ce2e5bc48068aa5d0f8ddbb21c0a" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.836474 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.836526 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9glz8\" (UniqueName: \"kubernetes.io/projected/bc23cf59-eb2e-4392-b7e9-bef4494db0d3-kube-api-access-9glz8\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.897440 4740 scope.go:117] "RemoveContainer" containerID="8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55" Jan 30 17:13:13 crc kubenswrapper[4740]: E0130 17:13:13.898480 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55\": container with ID starting with 8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55 not found: ID does not exist" containerID="8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.898524 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55"} err="failed to get container status \"8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55\": rpc error: code = NotFound desc = could not find container \"8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55\": container with ID starting with 8a70400df6a7d15efc6765e1fbd365964f72f30e7f2474a01239cfeb3c023e55 not found: ID does not exist" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.898556 4740 scope.go:117] "RemoveContainer" containerID="148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4" Jan 30 
17:13:13 crc kubenswrapper[4740]: E0130 17:13:13.899073 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4\": container with ID starting with 148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4 not found: ID does not exist" containerID="148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.899149 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4"} err="failed to get container status \"148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4\": rpc error: code = NotFound desc = could not find container \"148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4\": container with ID starting with 148e1652a8555d2fdc090d4e83a06b2dcbc900a03a99daea059f34b9ccfb8ee4 not found: ID does not exist" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.899192 4740 scope.go:117] "RemoveContainer" containerID="1550d35df40640e3b379b60f57059f048529ce2e5bc48068aa5d0f8ddbb21c0a" Jan 30 17:13:13 crc kubenswrapper[4740]: E0130 17:13:13.899630 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1550d35df40640e3b379b60f57059f048529ce2e5bc48068aa5d0f8ddbb21c0a\": container with ID starting with 1550d35df40640e3b379b60f57059f048529ce2e5bc48068aa5d0f8ddbb21c0a not found: ID does not exist" containerID="1550d35df40640e3b379b60f57059f048529ce2e5bc48068aa5d0f8ddbb21c0a" Jan 30 17:13:13 crc kubenswrapper[4740]: I0130 17:13:13.899668 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1550d35df40640e3b379b60f57059f048529ce2e5bc48068aa5d0f8ddbb21c0a"} err="failed to get container status \"1550d35df40640e3b379b60f57059f048529ce2e5bc48068aa5d0f8ddbb21c0a\": rpc error: code = NotFound desc = could not find container \"1550d35df40640e3b379b60f57059f048529ce2e5bc48068aa5d0f8ddbb21c0a\": container with ID starting with 1550d35df40640e3b379b60f57059f048529ce2e5bc48068aa5d0f8ddbb21c0a not found: ID does not exist" Jan 30 17:13:14 crc kubenswrapper[4740]: I0130 17:13:14.061637 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rq59"] Jan 30 17:13:14 crc kubenswrapper[4740]: I0130 17:13:14.073845 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rq59"] Jan 30 17:13:15 crc kubenswrapper[4740]: I0130 17:13:15.351814 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" path="/var/lib/kubelet/pods/bc23cf59-eb2e-4392-b7e9-bef4494db0d3/volumes" Jan 30 17:13:26 crc kubenswrapper[4740]: I0130 17:13:26.336573 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:13:26 crc kubenswrapper[4740]: E0130 17:13:26.337672 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" 
Jan 30 17:13:40 crc kubenswrapper[4740]: I0130 17:13:40.335602 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f"
Jan 30 17:13:40 crc kubenswrapper[4740]: E0130 17:13:40.336528 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 17:13:41 crc kubenswrapper[4740]: I0130 17:13:41.585778 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_47e7ebfc-24f9-4946-aace-c402546d5a60/init-config-reloader/0.log"
Jan 30 17:13:41 crc kubenswrapper[4740]: I0130 17:13:41.773665 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_47e7ebfc-24f9-4946-aace-c402546d5a60/init-config-reloader/0.log"
Jan 30 17:13:41 crc kubenswrapper[4740]: I0130 17:13:41.847952 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_47e7ebfc-24f9-4946-aace-c402546d5a60/config-reloader/0.log"
Jan 30 17:13:41 crc kubenswrapper[4740]: I0130 17:13:41.858500 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_47e7ebfc-24f9-4946-aace-c402546d5a60/alertmanager/0.log"
Jan 30 17:13:41 crc kubenswrapper[4740]: I0130 17:13:41.999637 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c9f844546-g6v8p_6c8ace4b-028d-45a5-af9d-360781681219/barbican-api/0.log"
Jan 30 17:13:42 crc kubenswrapper[4740]: I0130 17:13:42.083331 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c9f844546-g6v8p_6c8ace4b-028d-45a5-af9d-360781681219/barbican-api-log/0.log"
Jan 30 17:13:42 crc kubenswrapper[4740]: I0130 17:13:42.214529 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-649cd9f6b8-lgj8x_92b93f04-34e0-47a3-af34-cd7e7717c444/barbican-keystone-listener/0.log"
Jan 30 17:13:42 crc kubenswrapper[4740]: I0130 17:13:42.312971 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-649cd9f6b8-lgj8x_92b93f04-34e0-47a3-af34-cd7e7717c444/barbican-keystone-listener-log/0.log"
Jan 30 17:13:42 crc kubenswrapper[4740]: I0130 17:13:42.357885 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7cc6d874d7-q46r7_06bc0d0f-04a5-4703-97a4-6d44ccc42006/barbican-worker/0.log"
Jan 30 17:13:42 crc kubenswrapper[4740]: I0130 17:13:42.476274 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7cc6d874d7-q46r7_06bc0d0f-04a5-4703-97a4-6d44ccc42006/barbican-worker-log/0.log"
Jan 30 17:13:42 crc kubenswrapper[4740]: I0130 17:13:42.590064 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-w67pv_1d25020c-4758-47af-a6c4-5c6cd3c1b74b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:13:42 crc kubenswrapper[4740]: I0130 17:13:42.803894 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ca7e8237-6940-4092-8df0-97fa0865cc46/ceilometer-central-agent/0.log"
Jan 30 17:13:42 crc kubenswrapper[4740]: I0130 17:13:42.907220 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ca7e8237-6940-4092-8df0-97fa0865cc46/ceilometer-notification-agent/0.log"
Jan 30 17:13:43 crc kubenswrapper[4740]: I0130 17:13:43.042267 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ca7e8237-6940-4092-8df0-97fa0865cc46/proxy-httpd/0.log"
Jan 30 17:13:43 crc kubenswrapper[4740]: I0130 17:13:43.150378 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ca7e8237-6940-4092-8df0-97fa0865cc46/sg-core/0.log"
Jan 30 17:13:43 crc kubenswrapper[4740]: I0130 17:13:43.258307 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_22570e91-9697-47f0-81d5-c38551f883b2/cinder-api/0.log"
Jan 30 17:13:43 crc kubenswrapper[4740]: I0130 17:13:43.318495 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_22570e91-9697-47f0-81d5-c38551f883b2/cinder-api-log/0.log"
Jan 30 17:13:43 crc kubenswrapper[4740]: I0130 17:13:43.507094 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_97251097-8f48-4938-ba55-ca2ad0e01a6f/cinder-scheduler/0.log"
Jan 30 17:13:43 crc kubenswrapper[4740]: I0130 17:13:43.618449 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_97251097-8f48-4938-ba55-ca2ad0e01a6f/probe/0.log"
Jan 30 17:13:43 crc kubenswrapper[4740]: I0130 17:13:43.753398 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b1ae2907-297d-49dc-99ed-eda202004650/cloudkitty-api-log/0.log"
Jan 30 17:13:43 crc kubenswrapper[4740]: I0130 17:13:43.883097 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b1ae2907-297d-49dc-99ed-eda202004650/cloudkitty-api/0.log"
Jan 30 17:13:44 crc kubenswrapper[4740]: I0130 17:13:44.463944 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-66dfd9bb-ln5c7_2614d072-47f4-4ed5-bfca-df4e1c46c665/loki-distributor/0.log"
Jan 30 17:13:44 crc kubenswrapper[4740]: I0130 17:13:44.492014 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_770634d4-2799-4d23-b96d-9f7fa5286e72/loki-compactor/0.log"
Jan 30 17:13:44 crc kubenswrapper[4740]: I0130 17:13:44.690304 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-sp8t2_e2829a20-2177-481a-9a86-73f8bb323661/gateway/0.log"
Jan 30 17:13:44 crc kubenswrapper[4740]: I0130 17:13:44.746187 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7db4f4db8c-zzvpt_d46b15b9-9ad3-4699-9358-44d48e09f824/gateway/0.log"
Jan 30 17:13:45 crc kubenswrapper[4740]: I0130 17:13:45.118270 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_3e1a7e6f-eb4d-491c-8fa3-3f3da457eec1/loki-ingester/0.log"
Jan 30 17:13:45 crc kubenswrapper[4740]: I0130 17:13:45.143425 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_96208f50-7c8d-49c1-b235-def86e2ea52d/loki-index-gateway/0.log"
Jan 30 17:13:45 crc kubenswrapper[4740]: I0130 17:13:45.639153 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-795fd8f8cc-z6wx2_471174e9-72cd-40a9-8502-103a233c0dbe/loki-querier/0.log"
Jan 30 17:13:45 crc kubenswrapper[4740]: I0130 17:13:45.828941 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-5cd44666df-cwhf4_ba7fe294-1cd9-4cdc-ab0b-9e6d9293cde2/loki-query-frontend/0.log"
Jan 30 17:13:46 crc kubenswrapper[4740]: I0130 17:13:46.741447 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-j7s9r_2cf84dba-a4e6-413f-a6d5-81779c179d30/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:13:46 crc kubenswrapper[4740]: I0130 17:13:46.881247 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-d5lrw_b913a0e7-afaa-4afb-9520-7930587f3b2f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:13:47 crc kubenswrapper[4740]: I0130 17:13:47.249023 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-2bdvp_03fa751d-d601-4f94-8cd6-3607c005211c/init/0.log"
Jan 30 17:13:47 crc kubenswrapper[4740]: I0130 17:13:47.686053 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-2bdvp_03fa751d-d601-4f94-8cd6-3607c005211c/init/0.log"
Jan 30 17:13:47 crc kubenswrapper[4740]: I0130 17:13:47.692700 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wvj8d_c63be956-8703-45e6-8b81-1867d602a2d8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:13:47 crc kubenswrapper[4740]: I0130 17:13:47.721212 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-2bdvp_03fa751d-d601-4f94-8cd6-3607c005211c/dnsmasq-dns/0.log"
Jan 30 17:13:47 crc kubenswrapper[4740]: I0130 17:13:47.939517 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0e3b49e9-60b0-4090-a703-acbc21b9b6b0/glance-log/0.log"
Jan 30 17:13:48 crc kubenswrapper[4740]: I0130 17:13:48.030141 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0e3b49e9-60b0-4090-a703-acbc21b9b6b0/glance-httpd/0.log"
Jan 30 17:13:48 crc kubenswrapper[4740]: I0130 17:13:48.347351 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3d01b7b2-9f95-43e7-abae-1b1acb9c817b/glance-httpd/0.log"
Jan 30 17:13:48 crc kubenswrapper[4740]: I0130 17:13:48.455760 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3d01b7b2-9f95-43e7-abae-1b1acb9c817b/glance-log/0.log"
Jan 30 17:13:48 crc kubenswrapper[4740]: I0130 17:13:48.562895 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-r9b7f_dafe432a-92c3-4e2a-8e5b-6f4579049269/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:13:48 crc kubenswrapper[4740]: I0130 17:13:48.931720 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4bsfm_98c07536-da6e-495d-8148-949896f2b4e3/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:13:49 crc kubenswrapper[4740]: I0130 17:13:49.430259 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496541-jkgp5_c0afeb76-6c8f-47c6-81b9-ec569da67517/keystone-cron/0.log"
Jan 30 17:13:49 crc kubenswrapper[4740]: I0130 17:13:49.438897 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_1c39bfe6-b89f-4699-95ff-e79c94b13740/kube-state-metrics/0.log"
Jan 30 17:13:49 crc kubenswrapper[4740]: I0130 17:13:49.574686 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-59f5786cfd-w4tqb_7f54d2dc-eb88-4049-8f40-4605058f7feb/keystone-api/0.log"
Jan 30 17:13:49 crc kubenswrapper[4740]: I0130 17:13:49.924890 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-6z9gp_198ac256-3459-4e44-9c68-9efd25cf1ec5/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:13:50 crc kubenswrapper[4740]: I0130 17:13:50.377044 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-586b4b4677-4tdp8_4876d8e9-6662-4958-bb1a-091307ccfd02/neutron-httpd/0.log"
Jan 30 17:13:50 crc kubenswrapper[4740]: I0130 17:13:50.499598 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-586b4b4677-4tdp8_4876d8e9-6662-4958-bb1a-091307ccfd02/neutron-api/0.log"
Jan 30 17:13:50 crc kubenswrapper[4740]: I0130 17:13:50.628820 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nspw_c32077e1-24f2-46ea-868d-914b78472dfe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:13:51 crc kubenswrapper[4740]: I0130 17:13:51.471115 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2be29a5b-4407-4ef0-bf73-538f62c7ae2e/nova-api-log/0.log"
Jan 30 17:13:51 crc kubenswrapper[4740]: I0130 17:13:51.751759 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_dfe42dad-1fc3-4802-8d95-2e764a6c2750/nova-cell0-conductor-conductor/0.log"
Jan 30 17:13:51 crc kubenswrapper[4740]: I0130 17:13:51.969748 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2be29a5b-4407-4ef0-bf73-538f62c7ae2e/nova-api-api/0.log"
Jan 30 17:13:52 crc kubenswrapper[4740]: I0130 17:13:52.353993 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8372d763-1fed-4ff1-a573-ae34f6758115/nova-cell1-conductor-conductor/0.log"
Jan 30 17:13:52 crc kubenswrapper[4740]: I0130 17:13:52.779928 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ae10fd41-d2ed-4133-a41f-ecab597498fa/nova-cell1-novncproxy-novncproxy/0.log"
Jan 30 17:13:52 crc kubenswrapper[4740]: I0130 17:13:52.784905 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-58bwn_ffb086ab-4d15-4da9-babd-b3f544f4a26b/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:13:53 crc kubenswrapper[4740]: I0130 17:13:53.161039 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e1f00ea-3a1e-4684-ad0f-26180738550d/nova-metadata-log/0.log"
Jan 30 17:13:53 crc kubenswrapper[4740]: I0130 17:13:53.887030 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_98ae8da8-b7e7-40ca-8116-a91dc003a22c/nova-scheduler-scheduler/0.log"
Jan 30 17:13:53 crc kubenswrapper[4740]: I0130 17:13:53.911587 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_483203e9-89d7-4b67-b0b9-d0bda08469da/mysql-bootstrap/0.log"
Jan 30 17:13:54 crc kubenswrapper[4740]: I0130 17:13:54.095130 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_483203e9-89d7-4b67-b0b9-d0bda08469da/mysql-bootstrap/0.log"
Jan 30 17:13:54 crc kubenswrapper[4740]: I0130 17:13:54.176768 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_483203e9-89d7-4b67-b0b9-d0bda08469da/galera/0.log"
Jan 30 17:13:54 crc kubenswrapper[4740]: I0130 17:13:54.545032 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_09f1ea51-a4df-41eb-a996-f19303114474/mysql-bootstrap/0.log"
Jan 30 17:13:54 crc kubenswrapper[4740]: I0130 17:13:54.862157 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_09f1ea51-a4df-41eb-a996-f19303114474/mysql-bootstrap/0.log"
Jan 30 17:13:54 crc kubenswrapper[4740]: I0130 17:13:54.896158 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_09f1ea51-a4df-41eb-a996-f19303114474/galera/0.log"
Jan 30 17:13:55 crc kubenswrapper[4740]: I0130 17:13:55.194387 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1d0ac2c5-94e3-4d57-b2b4-31cf33ccf2f0/openstackclient/0.log"
Jan 30 17:13:55 crc kubenswrapper[4740]: I0130 17:13:55.335923 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f"
Jan 30 17:13:55 crc kubenswrapper[4740]: E0130 17:13:55.336299 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 17:13:55 crc kubenswrapper[4740]: I0130 17:13:55.486600 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8vhhm_25c16e6c-3931-4064-bf64-baf0759712a5/ovn-controller/0.log"
Jan 30 17:13:55 crc kubenswrapper[4740]: I0130 17:13:55.534439 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e1f00ea-3a1e-4684-ad0f-26180738550d/nova-metadata-metadata/0.log"
Jan 30 17:13:55 crc kubenswrapper[4740]: I0130 17:13:55.831839 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fpfkt_12656704-b213-40b2-9520-58db055e7380/openstack-network-exporter/0.log"
Jan 30 17:13:56 crc kubenswrapper[4740]: I0130 17:13:56.578453 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7wnqc_81f43ac7-ed84-4eff-af70-47991eaab066/ovsdb-server-init/0.log"
Jan 30 17:13:56 crc kubenswrapper[4740]: I0130 17:13:56.907238 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7wnqc_81f43ac7-ed84-4eff-af70-47991eaab066/ovsdb-server/0.log"
Jan 30 17:13:56 crc kubenswrapper[4740]: I0130 17:13:56.914668 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7wnqc_81f43ac7-ed84-4eff-af70-47991eaab066/ovsdb-server-init/0.log"
Jan 30 17:13:57 crc kubenswrapper[4740]: I0130 17:13:57.002051 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7wnqc_81f43ac7-ed84-4eff-af70-47991eaab066/ovs-vswitchd/0.log"
Jan 30 17:13:57 crc kubenswrapper[4740]: I0130 17:13:57.428145 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-7mgcd_bdddad8e-9863-4a79-9883-cd130b7fe9f2/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:13:57 crc kubenswrapper[4740]: I0130 17:13:57.511833 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5972ae70-676c-4eca-a931-92f76fe6efe5/openstack-network-exporter/0.log"
Jan 30 17:13:57 crc kubenswrapper[4740]: I0130 17:13:57.671572 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5972ae70-676c-4eca-a931-92f76fe6efe5/ovn-northd/0.log"
Jan 30 17:13:57 crc kubenswrapper[4740]: I0130 17:13:57.778776 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c2182168-2683-42dd-abfc-1d19d9079ca6/openstack-network-exporter/0.log"
Jan 30 17:13:58 crc kubenswrapper[4740]: I0130 17:13:58.067780 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c2182168-2683-42dd-abfc-1d19d9079ca6/ovsdbserver-nb/0.log"
Jan 30 17:13:58 crc kubenswrapper[4740]: I0130 17:13:58.218759 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_36bbce3a-c121-4811-9a61-ab05b62dce0b/openstack-network-exporter/0.log"
Jan 30 17:13:59 crc kubenswrapper[4740]: I0130 17:13:59.146684 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_36bbce3a-c121-4811-9a61-ab05b62dce0b/ovsdbserver-sb/0.log"
Jan 30 17:13:59 crc kubenswrapper[4740]: I0130 17:13:59.386045 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7745b764-mmpkw_f4b64e71-6b99-4f78-9636-4996a1e4ecee/placement-api/0.log"
Jan 30 17:13:59 crc kubenswrapper[4740]: I0130 17:13:59.655328 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7745b764-mmpkw_f4b64e71-6b99-4f78-9636-4996a1e4ecee/placement-log/0.log"
Jan 30 17:13:59 crc kubenswrapper[4740]: I0130 17:13:59.779271 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9b7e2c82-6c33-432f-b94e-ea939065b33c/init-config-reloader/0.log"
Jan 30 17:14:00 crc kubenswrapper[4740]: I0130 17:14:00.053723 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9b7e2c82-6c33-432f-b94e-ea939065b33c/prometheus/0.log"
Jan 30 17:14:00 crc kubenswrapper[4740]: I0130 17:14:00.061863 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9b7e2c82-6c33-432f-b94e-ea939065b33c/init-config-reloader/0.log"
Jan 30 17:14:00 crc kubenswrapper[4740]: I0130 17:14:00.071002 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9b7e2c82-6c33-432f-b94e-ea939065b33c/config-reloader/0.log"
Jan 30 17:14:00 crc kubenswrapper[4740]: I0130 17:14:00.309846 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1e1f0777-9068-4928-a4e8-971dfcbf905c/setup-container/0.log"
Jan 30 17:14:00 crc kubenswrapper[4740]: I0130 17:14:00.364206 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9b7e2c82-6c33-432f-b94e-ea939065b33c/thanos-sidecar/0.log"
Jan 30 17:14:00 crc kubenswrapper[4740]: I0130 17:14:00.656645 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1e1f0777-9068-4928-a4e8-971dfcbf905c/setup-container/0.log"
Jan 30 17:14:00 crc kubenswrapper[4740]: I0130 17:14:00.735589 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1e1f0777-9068-4928-a4e8-971dfcbf905c/rabbitmq/0.log"
Jan 30 17:14:01 crc kubenswrapper[4740]: I0130 17:14:01.003775 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_83485d04-0a7f-45d0-9a43-66412e5e577e/setup-container/0.log"
Jan 30 17:14:01 crc kubenswrapper[4740]: I0130 17:14:01.314139 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_83485d04-0a7f-45d0-9a43-66412e5e577e/setup-container/0.log"
Jan 30 17:14:01 crc kubenswrapper[4740]: I0130 17:14:01.448647 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_83485d04-0a7f-45d0-9a43-66412e5e577e/rabbitmq/0.log"
Jan 30 17:14:01 crc kubenswrapper[4740]: I0130 17:14:01.579637 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-xg2h6_9f74f942-192f-46c2-b1fd-df038a2fd9e7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:14:01 crc kubenswrapper[4740]: I0130 17:14:01.811252 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-qwsf4_92f231c6-6140-49b3-89ba-65cf9472a1dd/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:14:02 crc kubenswrapper[4740]: I0130 17:14:02.168250 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-h9dd6_e8918d38-5722-4b5b-9b52-5a18971aa5f1/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:14:02 crc kubenswrapper[4740]: I0130 17:14:02.361325 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xqw8f_4f08e5d6-c6a0-40a9-bb51-b45b73d9fa9c/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:14:02 crc kubenswrapper[4740]: I0130 17:14:02.575613 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-sklln_f616041d-231d-409f-b1eb-bb0939ada6d6/ssh-known-hosts-edpm-deployment/0.log"
Jan 30 17:14:02 crc kubenswrapper[4740]: I0130 17:14:02.979772 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77b9c5655-hbm7j_476176f1-b9ac-4d2d-90ea-7abfcea252c4/proxy-server/0.log"
Jan 30 17:14:03 crc kubenswrapper[4740]: I0130 17:14:03.149498 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77b9c5655-hbm7j_476176f1-b9ac-4d2d-90ea-7abfcea252c4/proxy-httpd/0.log"
Jan 30 17:14:03 crc kubenswrapper[4740]: I0130 17:14:03.352182 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_1f12295c-9646-4ff9-854d-542e75e78e5a/cloudkitty-proc/0.log"
Jan 30 17:14:03 crc kubenswrapper[4740]: I0130 17:14:03.381940 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9q76k_445dee53-61e3-43c6-b8a9-278954f963a2/swift-ring-rebalance/0.log"
Jan 30 17:14:03 crc kubenswrapper[4740]: I0130 17:14:03.478439 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/account-auditor/0.log"
Jan 30 17:14:03 crc kubenswrapper[4740]: I0130 17:14:03.607045 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/account-reaper/0.log"
Jan 30 17:14:03 crc kubenswrapper[4740]: I0130 17:14:03.704404 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/account-replicator/0.log"
Jan 30 17:14:03 crc kubenswrapper[4740]: I0130 17:14:03.743956 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/account-server/0.log"
Jan 30 17:14:03 crc kubenswrapper[4740]: I0130 17:14:03.822443 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/container-auditor/0.log"
Jan 30 17:14:03 crc kubenswrapper[4740]: I0130 17:14:03.990089 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/container-replicator/0.log"
Jan 30 17:14:04 crc kubenswrapper[4740]: I0130 17:14:04.027815 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/container-server/0.log"
Jan 30 17:14:04 crc kubenswrapper[4740]: I0130 17:14:04.058451 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/container-updater/0.log"
Jan 30 17:14:04 crc kubenswrapper[4740]: I0130 17:14:04.191699 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/object-auditor/0.log"
Jan 30 17:14:04 crc kubenswrapper[4740]: I0130 17:14:04.339662 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/object-expirer/0.log"
Jan 30 17:14:04 crc kubenswrapper[4740]: I0130 17:14:04.344447 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/object-server/0.log"
Jan 30 17:14:04 crc kubenswrapper[4740]: I0130 17:14:04.383477 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/object-replicator/0.log"
Jan 30 17:14:04 crc kubenswrapper[4740]: I0130 17:14:04.586726 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/object-updater/0.log"
Jan 30 17:14:04 crc kubenswrapper[4740]: I0130 17:14:04.717768 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/rsync/0.log"
Jan 30 17:14:04 crc kubenswrapper[4740]: I0130 17:14:04.761284 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_75ff5548-2e68-494b-b131-2b71eb8c9376/swift-recon-cron/0.log"
Jan 30 17:14:05 crc kubenswrapper[4740]: I0130 17:14:05.236554 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-b74gj_26ccd837-ffdb-4155-b2ad-032ef3dfa49e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:14:05 crc kubenswrapper[4740]: I0130 17:14:05.369075 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_bbb613b4-f2f3-4388-ae48-986e0281000f/tempest-tests-tempest-tests-runner/0.log"
Jan 30 17:14:05 crc kubenswrapper[4740]: I0130 17:14:05.511733 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7681e657-e354-4e35-8cd2-351cc51fdb4a/test-operator-logs-container/0.log"
Jan 30 17:14:05 crc kubenswrapper[4740]: I0130 17:14:05.680932 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9rt4b_a5fa2ffd-a5ba-47a0-a095-bd8219667aa3/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 30 17:14:07 crc kubenswrapper[4740]: I0130 17:14:07.340092 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f"
Jan 30 17:14:07 crc kubenswrapper[4740]: E0130 17:14:07.341474 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 17:14:10 crc kubenswrapper[4740]: I0130 17:14:10.915454 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_20cc1f1a-e021-42dd-b435-64eaf9cfa1d7/memcached/0.log"
Jan 30 17:14:18 crc kubenswrapper[4740]: I0130 17:14:18.335225 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f"
Jan 30 17:14:18 crc kubenswrapper[4740]: E0130 17:14:18.337303 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 17:14:32 crc kubenswrapper[4740]: I0130 17:14:32.335740 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f"
Jan 30 17:14:32 crc kubenswrapper[4740]: I0130 17:14:32.722185 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"c35794afa994d3ff4e920b0ac1c48c66fea8cc859a436bae72d0968bd9d3eac7"}
Jan 30 17:14:42 crc kubenswrapper[4740]: I0130 17:14:42.434622 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/util/0.log"
Jan 30 17:14:42 crc kubenswrapper[4740]: I0130 17:14:42.751758 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/util/0.log"
Jan 30 17:14:42 crc kubenswrapper[4740]: I0130 17:14:42.769914 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/pull/0.log"
Jan 30 17:14:42 crc kubenswrapper[4740]: I0130 17:14:42.821269 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/util/0.log"
Jan 30 17:14:43 crc kubenswrapper[4740]: I0130 17:14:43.061223 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/util/0.log"
Jan 30 17:14:43 crc kubenswrapper[4740]: I0130 17:14:43.061747 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/pull/0.log"
Jan 30 17:14:43 crc kubenswrapper[4740]: I0130 17:14:43.093855 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1059e1f694d4c57dc40c917701a1ebe8a45b0a91d1c786ce5a783c2dfergwh9_e62c1d15-4087-4ac5-85e0-7982f249c1a3/extract/0.log"
Jan 30 17:14:43 crc kubenswrapper[4740]: I0130 17:14:43.323626 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-tzdc2_b3f3f690-263c-406b-9651-b1d548a73010/manager/0.log"
Jan 30 17:14:43 crc kubenswrapper[4740]: I0130 17:14:43.415551 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-xsqtm_4ffa4d95-fc8d-4352-9bb3-b74038d53453/manager/0.log"
Jan 30 17:14:43 crc kubenswrapper[4740]: I0130 17:14:43.575063 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-q652d_7a1d5aff-da4c-4c0e-9616-44da3511eef2/manager/0.log"
Jan 30 17:14:44 crc kubenswrapper[4740]: I0130 17:14:44.072294 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-2cj65_de27448d-0b23-4bbb-81b2-7818361e53bf/manager/0.log"
Jan 30 17:14:44 crc kubenswrapper[4740]: I0130 17:14:44.075960 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-jjtfm_9fa5493f-2e76-4fda-9a43-4d8e7828f2a7/manager/0.log"
Jan 30 17:14:44 crc kubenswrapper[4740]: I0130 17:14:44.280460 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-g8sm9_97e430a6-ad51-4e80-999e-75e568b1d6b6/manager/0.log"
Jan 30 17:14:44 crc kubenswrapper[4740]: I0130 17:14:44.605770 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-nwnsv_ac86533b-0c5a-4704-b497-6e7e1114d938/manager/0.log"
Jan 30 17:14:44 crc kubenswrapper[4740]: I0130 17:14:44.621570 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-6wz9h_736c30f6-a1e4-47aa-a6d0-713baf99ad69/manager/0.log"
Jan 30 17:14:45 crc kubenswrapper[4740]: I0130 17:14:45.002505 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-d4hf5_88b0bde4-cd5b-4d3e-85aa-d2daac3eac2c/manager/0.log"
Jan 30 17:14:45 crc kubenswrapper[4740]: I0130 17:14:45.003809 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-w7jt2_b9648635-827e-4a21-8890-ba8b1772d7c4/manager/0.log"
Jan 30 17:14:45 crc kubenswrapper[4740]: I0130 17:14:45.299947 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-lqf5n_c35e116f-97e5-47ec-aa40-955321cb09d5/manager/0.log"
Jan 30 17:14:45 crc kubenswrapper[4740]: I0130 17:14:45.312429 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-v8885_b82bfd4e-e72e-4941-b8aa-1baae2433217/manager/0.log"
Jan 30 17:14:45 crc kubenswrapper[4740]: I0130 17:14:45.540141 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-bzjc5_6ba6b433-534d-4a14-9fbb-4418b1c39fd9/manager/0.log"
Jan 30 17:14:45 crc kubenswrapper[4740]: I0130 17:14:45.570504 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-tl627_a7ff8a9d-40f9-4354-aa10-e7e93907a0a5/manager/0.log"
Jan 30 17:14:46 crc kubenswrapper[4740]: I0130 17:14:46.393702 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlkhcg_107dde7f-ab99-4981-ba7a-0c6756408b54/manager/0.log"
Jan 30 17:14:46 crc kubenswrapper[4740]: I0130 17:14:46.401490 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-8d66f78b7-26k2v_72ae6a1c-defc-4fa0-8526-6fa59b0b2138/operator/0.log"
Jan 30 17:14:46 crc kubenswrapper[4740]: I0130 17:14:46.643594 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rwqps_6d98558b-10cb-4d22-ac8e-4db35ad5b364/registry-server/0.log"
Jan 30 17:14:47 crc kubenswrapper[4740]: I0130 17:14:47.163198 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-dpf55_5a6574e0-d6db-4e3d-9203-c3b28694e68f/manager/0.log"
Jan 30 17:14:47 crc kubenswrapper[4740]: I0130 17:14:47.224218 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-hszqm_b8a01322-677f-443a-83fd-6352c7523727/manager/0.log"
Jan 30 17:14:47 crc kubenswrapper[4740]: I0130 17:14:47.440243 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-bmhgb_0040ed18-716a-4452-8209-c45c497d7fae/operator/0.log"
Jan 30 17:14:47 crc kubenswrapper[4740]: I0130 17:14:47.549764 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-92p8l_011e9da6-1efe-4002-91f3-0aa0923fa015/manager/0.log"
Jan 30 17:14:47 crc kubenswrapper[4740]: I0130 17:14:47.936773 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-gtt5t_82688ddf-9d92-4ff1-873b-ca5766766189/manager/0.log"
Jan 30 17:14:48 crc kubenswrapper[4740]: I0130 17:14:48.064227 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bdc979b86-rndp8_4b1298c0-d749-42f3-97c1-ad1b19db8f96/manager/0.log"
Jan 30 17:14:48 crc kubenswrapper[4740]: I0130 17:14:48.178782 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-6sl24_d6ebfaaf-00f6-430e-bcb2-b5041395a101/manager/0.log"
Jan 30 17:14:48 crc kubenswrapper[4740]: I0130 17:14:48.319670 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-df45f6d5f-lc4fv_82b9c083-1154-46de-958e-6a7726aca988/manager/0.log"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.190066 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"]
Jan 30 17:15:00 crc kubenswrapper[4740]: E0130 17:15:00.191543 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" containerName="extract-utilities"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.191571 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" containerName="extract-utilities"
Jan 30 17:15:00 crc kubenswrapper[4740]: E0130 17:15:00.191606 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" containerName="registry-server"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.191615 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" containerName="registry-server"
Jan 30 17:15:00 crc kubenswrapper[4740]: E0130 17:15:00.191652 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" containerName="extract-content"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.191665 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" containerName="extract-content"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.191969 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc23cf59-eb2e-4392-b7e9-bef4494db0d3" containerName="registry-server"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.193162 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.197921 4740 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.203928 4740 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.205088 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"]
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.323131 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/341f1dde-1497-456e-b5f5-d7551e36d49b-config-volume\") pod \"collect-profiles-29496555-ln7qq\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.323506 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8gzj\" (UniqueName: \"kubernetes.io/projected/341f1dde-1497-456e-b5f5-d7551e36d49b-kube-api-access-x8gzj\") pod \"collect-profiles-29496555-ln7qq\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.323627 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/341f1dde-1497-456e-b5f5-d7551e36d49b-secret-volume\") pod \"collect-profiles-29496555-ln7qq\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.425511 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8gzj\" (UniqueName: \"kubernetes.io/projected/341f1dde-1497-456e-b5f5-d7551e36d49b-kube-api-access-x8gzj\") pod \"collect-profiles-29496555-ln7qq\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.425745 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/341f1dde-1497-456e-b5f5-d7551e36d49b-secret-volume\") pod \"collect-profiles-29496555-ln7qq\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.425791 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/341f1dde-1497-456e-b5f5-d7551e36d49b-config-volume\") pod \"collect-profiles-29496555-ln7qq\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.427125 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/341f1dde-1497-456e-b5f5-d7551e36d49b-config-volume\") pod \"collect-profiles-29496555-ln7qq\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.439294 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/341f1dde-1497-456e-b5f5-d7551e36d49b-secret-volume\") pod \"collect-profiles-29496555-ln7qq\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.454759 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8gzj\" (UniqueName: \"kubernetes.io/projected/341f1dde-1497-456e-b5f5-d7551e36d49b-kube-api-access-x8gzj\") pod \"collect-profiles-29496555-ln7qq\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:00 crc kubenswrapper[4740]: I0130 17:15:00.529137 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:01 crc kubenswrapper[4740]: I0130 17:15:01.098129 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"]
Jan 30 17:15:02 crc kubenswrapper[4740]: I0130 17:15:02.117556 4740 generic.go:334] "Generic (PLEG): container finished" podID="341f1dde-1497-456e-b5f5-d7551e36d49b" containerID="509dcad558978237051bb13e25302bd0c4d93cead16ca69f089e33a2daab57cd" exitCode=0
Jan 30 17:15:02 crc kubenswrapper[4740]: I0130 17:15:02.117683 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq" event={"ID":"341f1dde-1497-456e-b5f5-d7551e36d49b","Type":"ContainerDied","Data":"509dcad558978237051bb13e25302bd0c4d93cead16ca69f089e33a2daab57cd"}
Jan 30 17:15:02 crc kubenswrapper[4740]: I0130 17:15:02.117985 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq" event={"ID":"341f1dde-1497-456e-b5f5-d7551e36d49b","Type":"ContainerStarted","Data":"b4cbdb2d00a1f51f74bb11ad751869c46755b76f761298a9e20babd416a6f1be"}
Jan 30 17:15:03 crc kubenswrapper[4740]: I0130 17:15:03.819736 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:03 crc kubenswrapper[4740]: I0130 17:15:03.915727 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8gzj\" (UniqueName: \"kubernetes.io/projected/341f1dde-1497-456e-b5f5-d7551e36d49b-kube-api-access-x8gzj\") pod \"341f1dde-1497-456e-b5f5-d7551e36d49b\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") "
Jan 30 17:15:03 crc kubenswrapper[4740]: I0130 17:15:03.916089 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/341f1dde-1497-456e-b5f5-d7551e36d49b-secret-volume\") pod \"341f1dde-1497-456e-b5f5-d7551e36d49b\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") "
Jan 30 17:15:03 crc kubenswrapper[4740]: I0130 17:15:03.916300 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/341f1dde-1497-456e-b5f5-d7551e36d49b-config-volume\") pod \"341f1dde-1497-456e-b5f5-d7551e36d49b\" (UID: \"341f1dde-1497-456e-b5f5-d7551e36d49b\") "
Jan 30 17:15:03 crc kubenswrapper[4740]: I0130 17:15:03.917245 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/341f1dde-1497-456e-b5f5-d7551e36d49b-config-volume" (OuterVolumeSpecName: "config-volume") pod "341f1dde-1497-456e-b5f5-d7551e36d49b" (UID: "341f1dde-1497-456e-b5f5-d7551e36d49b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:15:03 crc kubenswrapper[4740]: I0130 17:15:03.922577 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341f1dde-1497-456e-b5f5-d7551e36d49b-kube-api-access-x8gzj" (OuterVolumeSpecName: "kube-api-access-x8gzj") pod "341f1dde-1497-456e-b5f5-d7551e36d49b" (UID: "341f1dde-1497-456e-b5f5-d7551e36d49b"). InnerVolumeSpecName "kube-api-access-x8gzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:15:03 crc kubenswrapper[4740]: I0130 17:15:03.923605 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/341f1dde-1497-456e-b5f5-d7551e36d49b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "341f1dde-1497-456e-b5f5-d7551e36d49b" (UID: "341f1dde-1497-456e-b5f5-d7551e36d49b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:15:04 crc kubenswrapper[4740]: I0130 17:15:04.018751 4740 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/341f1dde-1497-456e-b5f5-d7551e36d49b-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:15:04 crc kubenswrapper[4740]: I0130 17:15:04.018791 4740 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/341f1dde-1497-456e-b5f5-d7551e36d49b-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:15:04 crc kubenswrapper[4740]: I0130 17:15:04.018806 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8gzj\" (UniqueName: \"kubernetes.io/projected/341f1dde-1497-456e-b5f5-d7551e36d49b-kube-api-access-x8gzj\") on node \"crc\" DevicePath \"\""
Jan 30 17:15:04 crc kubenswrapper[4740]: I0130 17:15:04.145387 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq" event={"ID":"341f1dde-1497-456e-b5f5-d7551e36d49b","Type":"ContainerDied","Data":"b4cbdb2d00a1f51f74bb11ad751869c46755b76f761298a9e20babd416a6f1be"}
Jan 30 17:15:04 crc kubenswrapper[4740]: I0130 17:15:04.145428 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-ln7qq"
Jan 30 17:15:04 crc kubenswrapper[4740]: I0130 17:15:04.145477 4740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4cbdb2d00a1f51f74bb11ad751869c46755b76f761298a9e20babd416a6f1be"
Jan 30 17:15:04 crc kubenswrapper[4740]: I0130 17:15:04.986440 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz"]
Jan 30 17:15:05 crc kubenswrapper[4740]: I0130 17:15:05.016572 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496510-dmpfz"]
Jan 30 17:15:05 crc kubenswrapper[4740]: I0130 17:15:05.349765 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03" path="/var/lib/kubelet/pods/e5e3bac8-b9c1-437c-8c01-9e9a87c5bf03/volumes"
Jan 30 17:15:17 crc kubenswrapper[4740]: I0130 17:15:17.292379 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-lpktp_0336ee48-8f1e-49ed-a021-a01446330b39/control-plane-machine-set-operator/0.log"
Jan 30 17:15:17 crc kubenswrapper[4740]: I0130 17:15:17.529343 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dl6xs_be7f0e88-7c2e-4c1b-a617-9da27584b057/machine-api-operator/0.log"
Jan 30 17:15:17 crc kubenswrapper[4740]: I0130 17:15:17.601671 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dl6xs_be7f0e88-7c2e-4c1b-a617-9da27584b057/kube-rbac-proxy/0.log"
Jan 30 17:15:37 crc kubenswrapper[4740]: I0130 17:15:37.987971 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-4xwlh_38135331-191d-4ef6-a002-936b6b4a17b3/cert-manager-controller/0.log"
Jan 30 17:15:38 crc kubenswrapper[4740]: I0130 17:15:38.251816 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7dg58_72764858-c1a4-408a-887a-c48ad0b4d10a/cert-manager-cainjector/0.log"
Jan 30 17:15:38 crc kubenswrapper[4740]: I0130 17:15:38.340517 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mn7d8_3979d983-a849-4be3-a862-caed0065a705/cert-manager-webhook/0.log"
Jan 30 17:15:38 crc kubenswrapper[4740]: I0130 17:15:38.879643 4740 scope.go:117] "RemoveContainer" containerID="9681d2b6e2ead397d583382cb362f3270e511d79abe6d6f4c679c22925fbc0d3"
Jan 30 17:15:46 crc kubenswrapper[4740]: I0130 17:15:46.845598 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ldfv8"]
Jan 30 17:15:46 crc kubenswrapper[4740]: E0130 17:15:46.846737 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341f1dde-1497-456e-b5f5-d7551e36d49b" containerName="collect-profiles"
Jan 30 17:15:46 crc kubenswrapper[4740]: I0130 17:15:46.846756 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="341f1dde-1497-456e-b5f5-d7551e36d49b" containerName="collect-profiles"
Jan 30 17:15:46 crc kubenswrapper[4740]: I0130 17:15:46.847020 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="341f1dde-1497-456e-b5f5-d7551e36d49b" containerName="collect-profiles"
Jan 30 17:15:46 crc kubenswrapper[4740]: I0130 17:15:46.854752 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldfv8"
Jan 30 17:15:46 crc kubenswrapper[4740]: I0130 17:15:46.891256 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ldfv8"]
Jan 30 17:15:47 crc kubenswrapper[4740]: I0130 17:15:47.004273 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-catalog-content\") pod \"redhat-operators-ldfv8\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " pod="openshift-marketplace/redhat-operators-ldfv8"
Jan 30 17:15:47 crc kubenswrapper[4740]: I0130 17:15:47.004513 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdtnx\" (UniqueName: \"kubernetes.io/projected/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-kube-api-access-tdtnx\") pod \"redhat-operators-ldfv8\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " pod="openshift-marketplace/redhat-operators-ldfv8"
Jan 30 17:15:47 crc kubenswrapper[4740]: I0130 17:15:47.004620 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-utilities\") pod \"redhat-operators-ldfv8\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " pod="openshift-marketplace/redhat-operators-ldfv8"
Jan 30 17:15:47 crc kubenswrapper[4740]: I0130 17:15:47.106457 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-utilities\") pod \"redhat-operators-ldfv8\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " pod="openshift-marketplace/redhat-operators-ldfv8"
Jan 30 17:15:47 crc kubenswrapper[4740]: I0130 17:15:47.106595 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-catalog-content\") pod \"redhat-operators-ldfv8\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " pod="openshift-marketplace/redhat-operators-ldfv8"
Jan 30 17:15:47 crc kubenswrapper[4740]: I0130 17:15:47.106700 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdtnx\" (UniqueName: \"kubernetes.io/projected/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-kube-api-access-tdtnx\") pod \"redhat-operators-ldfv8\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " pod="openshift-marketplace/redhat-operators-ldfv8"
Jan 30 17:15:47 crc kubenswrapper[4740]: I0130 17:15:47.107727 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-utilities\") pod \"redhat-operators-ldfv8\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " pod="openshift-marketplace/redhat-operators-ldfv8"
Jan 30 17:15:47 crc kubenswrapper[4740]: I0130 17:15:47.107777 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-catalog-content\") pod \"redhat-operators-ldfv8\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " pod="openshift-marketplace/redhat-operators-ldfv8"
Jan 30 17:15:47 crc kubenswrapper[4740]: I0130 17:15:47.130099 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdtnx\" (UniqueName: \"kubernetes.io/projected/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-kube-api-access-tdtnx\") pod \"redhat-operators-ldfv8\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " pod="openshift-marketplace/redhat-operators-ldfv8"
Jan 30 17:15:47 crc kubenswrapper[4740]: I0130 17:15:47.186453 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldfv8"
Need to start a new one" pod="openshift-marketplace/redhat-operators-ldfv8" Jan 30 17:15:47 crc kubenswrapper[4740]: I0130 17:15:47.752245 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ldfv8"] Jan 30 17:15:48 crc kubenswrapper[4740]: I0130 17:15:48.715250 4740 generic.go:334] "Generic (PLEG): container finished" podID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerID="0ccdf9816c44dd306bcd1156650377be0be20c1e3ecff4fc218595f44d4dc933" exitCode=0 Jan 30 17:15:48 crc kubenswrapper[4740]: I0130 17:15:48.715346 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldfv8" event={"ID":"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a","Type":"ContainerDied","Data":"0ccdf9816c44dd306bcd1156650377be0be20c1e3ecff4fc218595f44d4dc933"} Jan 30 17:15:48 crc kubenswrapper[4740]: I0130 17:15:48.715640 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldfv8" event={"ID":"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a","Type":"ContainerStarted","Data":"4cd1a4bc00b8f385aa624c92b4855b4a121fbad070d04f445e4b7ed1b9a9a45f"} Jan 30 17:15:52 crc kubenswrapper[4740]: I0130 17:15:52.763398 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldfv8" event={"ID":"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a","Type":"ContainerStarted","Data":"ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40"} Jan 30 17:15:52 crc kubenswrapper[4740]: I0130 17:15:52.832994 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-829cr"] Jan 30 17:15:52 crc kubenswrapper[4740]: I0130 17:15:52.836074 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:15:52 crc kubenswrapper[4740]: I0130 17:15:52.847774 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-829cr"] Jan 30 17:15:52 crc kubenswrapper[4740]: I0130 17:15:52.973096 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-utilities\") pod \"certified-operators-829cr\" (UID: \"e341d963-5ce1-4f81-945e-cc43f69acae4\") " pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:15:52 crc kubenswrapper[4740]: I0130 17:15:52.973162 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-catalog-content\") pod \"certified-operators-829cr\" (UID: \"e341d963-5ce1-4f81-945e-cc43f69acae4\") " pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:15:52 crc kubenswrapper[4740]: I0130 17:15:52.973187 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64m75\" (UniqueName: \"kubernetes.io/projected/e341d963-5ce1-4f81-945e-cc43f69acae4-kube-api-access-64m75\") pod \"certified-operators-829cr\" (UID: \"e341d963-5ce1-4f81-945e-cc43f69acae4\") " pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:15:53 crc kubenswrapper[4740]: I0130 17:15:53.075505 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-utilities\") pod \"certified-operators-829cr\" (UID: 
\"e341d963-5ce1-4f81-945e-cc43f69acae4\") " pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:15:53 crc kubenswrapper[4740]: I0130 17:15:53.075596 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-catalog-content\") pod \"certified-operators-829cr\" (UID: \"e341d963-5ce1-4f81-945e-cc43f69acae4\") " pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:15:53 crc kubenswrapper[4740]: I0130 17:15:53.075639 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64m75\" (UniqueName: \"kubernetes.io/projected/e341d963-5ce1-4f81-945e-cc43f69acae4-kube-api-access-64m75\") pod \"certified-operators-829cr\" (UID: \"e341d963-5ce1-4f81-945e-cc43f69acae4\") " pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:15:53 crc kubenswrapper[4740]: I0130 17:15:53.076413 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-utilities\") pod \"certified-operators-829cr\" (UID: \"e341d963-5ce1-4f81-945e-cc43f69acae4\") " pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:15:53 crc kubenswrapper[4740]: I0130 17:15:53.076881 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-catalog-content\") pod \"certified-operators-829cr\" (UID: \"e341d963-5ce1-4f81-945e-cc43f69acae4\") " pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:15:53 crc kubenswrapper[4740]: I0130 17:15:53.100685 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64m75\" (UniqueName: \"kubernetes.io/projected/e341d963-5ce1-4f81-945e-cc43f69acae4-kube-api-access-64m75\") pod \"certified-operators-829cr\" (UID: \"e341d963-5ce1-4f81-945e-cc43f69acae4\") " pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:15:53 crc kubenswrapper[4740]: I0130 17:15:53.178382 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:15:54 crc kubenswrapper[4740]: I0130 17:15:54.032699 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-829cr"] Jan 30 17:15:54 crc kubenswrapper[4740]: W0130 17:15:54.036974 4740 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode341d963_5ce1_4f81_945e_cc43f69acae4.slice/crio-aa3528e802fc9bb6642183e2fdb93aac7e2fae7ecf17650557d4da6066178b63 WatchSource:0}: Error finding container aa3528e802fc9bb6642183e2fdb93aac7e2fae7ecf17650557d4da6066178b63: Status 404 returned error can't find the container with id aa3528e802fc9bb6642183e2fdb93aac7e2fae7ecf17650557d4da6066178b63 Jan 30 17:15:54 crc kubenswrapper[4740]: I0130 17:15:54.804954 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-829cr" event={"ID":"e341d963-5ce1-4f81-945e-cc43f69acae4","Type":"ContainerStarted","Data":"5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7"} Jan 30 17:15:54 crc kubenswrapper[4740]: I0130 17:15:54.805266 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-829cr" event={"ID":"e341d963-5ce1-4f81-945e-cc43f69acae4","Type":"ContainerStarted","Data":"aa3528e802fc9bb6642183e2fdb93aac7e2fae7ecf17650557d4da6066178b63"} Jan 30 17:15:55 crc kubenswrapper[4740]: I0130 17:15:55.820786 4740 generic.go:334] "Generic (PLEG): container finished" podID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerID="5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7" exitCode=0 Jan 30 17:15:55 crc kubenswrapper[4740]: I0130 17:15:55.820889 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-829cr" event={"ID":"e341d963-5ce1-4f81-945e-cc43f69acae4","Type":"ContainerDied","Data":"5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7"} Jan 30 17:15:55 crc kubenswrapper[4740]: I0130 17:15:55.821324 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-829cr" event={"ID":"e341d963-5ce1-4f81-945e-cc43f69acae4","Type":"ContainerStarted","Data":"534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f"} Jan 30 17:16:02 crc kubenswrapper[4740]: I0130 17:16:02.405501 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-b7cbr_0f05dfb1-ebdb-4b8d-8699-1b254807132b/nmstate-console-plugin/0.log" Jan 30 17:16:02 crc kubenswrapper[4740]: I0130 17:16:02.692558 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hs8jj_479ee03d-d745-43ab-83d0-46f6e4cf1a21/nmstate-handler/0.log" Jan 30 17:16:02 crc kubenswrapper[4740]: I0130 17:16:02.885193 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-lb5hz_01faa5ac-05c7-44cf-a393-e67e5e47c683/kube-rbac-proxy/0.log" Jan 30 17:16:03 crc kubenswrapper[4740]: I0130 17:16:03.024423 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-lb5hz_01faa5ac-05c7-44cf-a393-e67e5e47c683/nmstate-metrics/0.log" Jan 30 17:16:03 crc kubenswrapper[4740]: I0130 17:16:03.121460 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-dw9f7_27f0888f-27f8-4ebe-86ed-a07a0995a241/nmstate-operator/0.log" Jan 30 17:16:03 crc 
kubenswrapper[4740]: I0130 17:16:03.303171 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-tf56m_3ba55d47-87a4-4a5e-b3a7-9a737aef9125/nmstate-webhook/0.log" Jan 30 17:16:03 crc kubenswrapper[4740]: I0130 17:16:03.949843 4740 generic.go:334] "Generic (PLEG): container finished" podID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerID="534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f" exitCode=0 Jan 30 17:16:03 crc kubenswrapper[4740]: I0130 17:16:03.949905 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-829cr" event={"ID":"e341d963-5ce1-4f81-945e-cc43f69acae4","Type":"ContainerDied","Data":"534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f"} Jan 30 17:16:05 crc kubenswrapper[4740]: I0130 17:16:05.973611 4740 generic.go:334] "Generic (PLEG): container finished" podID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerID="ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40" exitCode=0 Jan 30 17:16:05 crc kubenswrapper[4740]: I0130 17:16:05.973695 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldfv8" event={"ID":"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a","Type":"ContainerDied","Data":"ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40"} Jan 30 17:16:05 crc kubenswrapper[4740]: I0130 17:16:05.983508 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-829cr" event={"ID":"e341d963-5ce1-4f81-945e-cc43f69acae4","Type":"ContainerStarted","Data":"dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1"} Jan 30 17:16:06 crc kubenswrapper[4740]: I0130 17:16:06.043205 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-829cr" podStartSLOduration=4.165582357 podStartE2EDuration="14.043172307s" podCreationTimestamp="2026-01-30 17:15:52 +0000 UTC" firstStartedPulling="2026-01-30 17:15:54.810847632 +0000 UTC m=+4803.447910231" lastFinishedPulling="2026-01-30 17:16:04.688437582 +0000 UTC m=+4813.325500181" observedRunningTime="2026-01-30 17:16:06.030308324 +0000 UTC m=+4814.667370933" watchObservedRunningTime="2026-01-30 17:16:06.043172307 +0000 UTC m=+4814.680234906" Jan 30 17:16:08 crc kubenswrapper[4740]: I0130 17:16:08.017422 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldfv8" event={"ID":"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a","Type":"ContainerStarted","Data":"3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab"} Jan 30 17:16:08 crc kubenswrapper[4740]: I0130 17:16:08.047434 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ldfv8" podStartSLOduration=4.289540518 podStartE2EDuration="22.047408305s" podCreationTimestamp="2026-01-30 17:15:46 +0000 UTC" firstStartedPulling="2026-01-30 17:15:48.717692116 +0000 UTC m=+4797.354754715" lastFinishedPulling="2026-01-30 17:16:06.475559903 +0000 UTC m=+4815.112622502" observedRunningTime="2026-01-30 17:16:08.036606853 +0000 UTC m=+4816.673669462" watchObservedRunningTime="2026-01-30 17:16:08.047408305 +0000 UTC m=+4816.684470904" Jan 30 17:16:13 crc kubenswrapper[4740]: I0130 17:16:13.179492 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:16:13 crc kubenswrapper[4740]: I0130 17:16:13.180141 4740 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:16:14 crc kubenswrapper[4740]: I0130 17:16:14.232727 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-829cr" podUID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerName="registry-server" probeResult="failure" output=< Jan 30 17:16:14 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 17:16:14 crc kubenswrapper[4740]: > Jan 30 17:16:17 crc kubenswrapper[4740]: I0130 17:16:17.186838 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ldfv8" Jan 30 17:16:17 crc kubenswrapper[4740]: I0130 17:16:17.187339 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ldfv8" Jan 30 17:16:18 crc kubenswrapper[4740]: I0130 17:16:18.241069 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ldfv8" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="registry-server" probeResult="failure" output=< Jan 30 17:16:18 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 17:16:18 crc kubenswrapper[4740]: > Jan 30 17:16:24 crc kubenswrapper[4740]: I0130 17:16:24.818278 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-829cr" podUID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerName="registry-server" probeResult="failure" output=< Jan 30 17:16:24 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 17:16:24 crc kubenswrapper[4740]: > Jan 30 17:16:25 crc kubenswrapper[4740]: I0130 17:16:25.501140 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b8b44847-7889n_05857b7d-f148-447a-96bb-d9846ef7402c/kube-rbac-proxy/0.log" Jan 30 17:16:25 crc kubenswrapper[4740]: I0130 17:16:25.701219 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b8b44847-7889n_05857b7d-f148-447a-96bb-d9846ef7402c/manager/0.log" Jan 30 17:16:28 crc kubenswrapper[4740]: I0130 17:16:28.238020 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ldfv8" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="registry-server" probeResult="failure" output=< Jan 30 17:16:28 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 17:16:28 crc kubenswrapper[4740]: > Jan 30 17:16:33 crc kubenswrapper[4740]: I0130 17:16:33.239784 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:16:33 crc kubenswrapper[4740]: I0130 17:16:33.303451 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:16:33 crc kubenswrapper[4740]: I0130 17:16:33.482873 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-829cr"] Jan 30 17:16:34 crc kubenswrapper[4740]: I0130 17:16:34.342212 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-829cr" podUID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerName="registry-server" 
containerID="cri-o://dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1" gracePeriod=2 Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.189511 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.239620 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64m75\" (UniqueName: \"kubernetes.io/projected/e341d963-5ce1-4f81-945e-cc43f69acae4-kube-api-access-64m75\") pod \"e341d963-5ce1-4f81-945e-cc43f69acae4\" (UID: \"e341d963-5ce1-4f81-945e-cc43f69acae4\") " Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.239731 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-catalog-content\") pod \"e341d963-5ce1-4f81-945e-cc43f69acae4\" (UID: \"e341d963-5ce1-4f81-945e-cc43f69acae4\") " Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.239785 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-utilities\") pod \"e341d963-5ce1-4f81-945e-cc43f69acae4\" (UID: \"e341d963-5ce1-4f81-945e-cc43f69acae4\") " Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.241675 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-utilities" (OuterVolumeSpecName: "utilities") pod "e341d963-5ce1-4f81-945e-cc43f69acae4" (UID: "e341d963-5ce1-4f81-945e-cc43f69acae4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.247591 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e341d963-5ce1-4f81-945e-cc43f69acae4-kube-api-access-64m75" (OuterVolumeSpecName: "kube-api-access-64m75") pod "e341d963-5ce1-4f81-945e-cc43f69acae4" (UID: "e341d963-5ce1-4f81-945e-cc43f69acae4"). InnerVolumeSpecName "kube-api-access-64m75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.300162 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e341d963-5ce1-4f81-945e-cc43f69acae4" (UID: "e341d963-5ce1-4f81-945e-cc43f69acae4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.345840 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64m75\" (UniqueName: \"kubernetes.io/projected/e341d963-5ce1-4f81-945e-cc43f69acae4-kube-api-access-64m75\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.345897 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.345908 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e341d963-5ce1-4f81-945e-cc43f69acae4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.355723 4740 generic.go:334] "Generic (PLEG): container finished" podID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerID="dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1" exitCode=0 Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.355789 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-829cr" event={"ID":"e341d963-5ce1-4f81-945e-cc43f69acae4","Type":"ContainerDied","Data":"dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1"} Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.356101 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-829cr" event={"ID":"e341d963-5ce1-4f81-945e-cc43f69acae4","Type":"ContainerDied","Data":"aa3528e802fc9bb6642183e2fdb93aac7e2fae7ecf17650557d4da6066178b63"} Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.356132 4740 scope.go:117] "RemoveContainer" containerID="dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.355804 4740 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-829cr" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.390970 4740 scope.go:117] "RemoveContainer" containerID="534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.405113 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-829cr"] Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.433510 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-829cr"] Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.437511 4740 scope.go:117] "RemoveContainer" containerID="5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.539743 4740 scope.go:117] "RemoveContainer" containerID="dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1" Jan 30 17:16:35 crc kubenswrapper[4740]: E0130 17:16:35.541088 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1\": container with ID starting with dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1 not found: ID does not exist" containerID="dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.541272 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1"} err="failed to get container status \"dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1\": rpc error: code = NotFound desc = could not find container \"dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1\": container with ID starting with dcfa44f8106125d7f306d9f5243940925486a3e6fb7dd5c8728ad0b4b5bd33b1 not found: ID does not exist" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.541434 4740 scope.go:117] "RemoveContainer" containerID="534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f" Jan 30 17:16:35 crc kubenswrapper[4740]: E0130 17:16:35.544309 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f\": container with ID starting with 534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f not found: ID does not exist" containerID="534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.544623 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f"} err="failed to get container status \"534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f\": rpc error: code = NotFound desc = could not find container \"534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f\": container with ID starting with 534d8a90696d9828c477d54b9bb4049c030bd50ae1995c02aa6f3ec50adcfe3f not found: ID does not exist" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.544757 4740 scope.go:117] "RemoveContainer" containerID="5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7" Jan 30 17:16:35 crc kubenswrapper[4740]: E0130 17:16:35.546267 4740 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7\": container with ID starting with 5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7 not found: ID does not exist" containerID="5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7" Jan 30 17:16:35 crc kubenswrapper[4740]: I0130 17:16:35.546338 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7"} err="failed to get container status \"5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7\": rpc error: code = NotFound desc = could not find container \"5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7\": container with ID starting with 5b4c0bb9a6453c10859cc78f833b27e971747db51892c976c0a4940735861ae7 not found: ID does not exist" Jan 30 17:16:37 crc kubenswrapper[4740]: I0130 17:16:37.350219 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e341d963-5ce1-4f81-945e-cc43f69acae4" path="/var/lib/kubelet/pods/e341d963-5ce1-4f81-945e-cc43f69acae4/volumes" Jan 30 17:16:38 crc kubenswrapper[4740]: I0130 17:16:38.247714 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ldfv8" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="registry-server" probeResult="failure" output=< Jan 30 17:16:38 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 17:16:38 crc kubenswrapper[4740]: > Jan 30 17:16:46 crc kubenswrapper[4740]: I0130 17:16:46.788963 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fl52v_7fed297b-1b60-4fa1-81ad-f7aff661624d/prometheus-operator/0.log" Jan 30 17:16:47 crc kubenswrapper[4740]: I0130 17:16:47.251946 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_e70968d1-7497-4724-9c80-cf5abdf288ea/prometheus-operator-admission-webhook/0.log" Jan 30 17:16:47 crc kubenswrapper[4740]: I0130 17:16:47.297755 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_27f815e6-2917-46af-8a6d-4bcd66c35042/prometheus-operator-admission-webhook/0.log" Jan 30 17:16:47 crc kubenswrapper[4740]: I0130 17:16:47.918579 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-pdgvg_6a0acde2-70b4-4622-a609-290cbc5f253f/operator/0.log" Jan 30 17:16:47 crc kubenswrapper[4740]: I0130 17:16:47.984449 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-r2zbm_522756c7-f451-4879-b2b3-2d19b80cb751/perses-operator/0.log" Jan 30 17:16:48 crc kubenswrapper[4740]: I0130 17:16:48.647667 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ldfv8" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="registry-server" probeResult="failure" output=< Jan 30 17:16:48 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 17:16:48 crc kubenswrapper[4740]: > Jan 30 17:16:54 crc kubenswrapper[4740]: I0130 17:16:54.455002 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:16:54 crc kubenswrapper[4740]: I0130 17:16:54.455680 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:16:58 crc kubenswrapper[4740]: I0130 17:16:58.268563 4740 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ldfv8" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="registry-server" probeResult="failure" output=< Jan 30 17:16:58 crc kubenswrapper[4740]: timeout: failed to connect service ":50051" within 1s Jan 30 17:16:58 crc kubenswrapper[4740]: > Jan 30 17:17:07 crc kubenswrapper[4740]: I0130 17:17:07.250653 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ldfv8" Jan 30 17:17:07 crc kubenswrapper[4740]: I0130 17:17:07.311656 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ldfv8" Jan 30 17:17:07 crc kubenswrapper[4740]: I0130 17:17:07.531245 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ldfv8"] Jan 30 17:17:08 crc kubenswrapper[4740]: I0130 17:17:08.847506 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ldfv8" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="registry-server" containerID="cri-o://3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab" gracePeriod=2 Jan 30 17:17:09 crc kubenswrapper[4740]: I0130 17:17:09.815616 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldfv8" Jan 30 17:17:09 crc kubenswrapper[4740]: I0130 17:17:09.924861 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-utilities\") pod \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " Jan 30 17:17:09 crc kubenswrapper[4740]: I0130 17:17:09.924915 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-catalog-content\") pod \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " Jan 30 17:17:09 crc kubenswrapper[4740]: I0130 17:17:09.925015 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdtnx\" (UniqueName: \"kubernetes.io/projected/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-kube-api-access-tdtnx\") pod \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\" (UID: \"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a\") " Jan 30 17:17:09 crc kubenswrapper[4740]: I0130 17:17:09.926545 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-utilities" (OuterVolumeSpecName: "utilities") pod "b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" (UID: "b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:09 crc kubenswrapper[4740]: I0130 17:17:09.929901 4740 generic.go:334] "Generic (PLEG): container finished" podID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerID="3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab" exitCode=0 Jan 30 17:17:09 crc kubenswrapper[4740]: I0130 17:17:09.929955 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldfv8" event={"ID":"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a","Type":"ContainerDied","Data":"3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab"} Jan 30 17:17:09 crc kubenswrapper[4740]: I0130 17:17:09.929990 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ldfv8" event={"ID":"b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a","Type":"ContainerDied","Data":"4cd1a4bc00b8f385aa624c92b4855b4a121fbad070d04f445e4b7ed1b9a9a45f"} Jan 30 17:17:09 crc kubenswrapper[4740]: I0130 17:17:09.930010 4740 scope.go:117] "RemoveContainer" containerID="3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab" Jan 30 17:17:09 crc kubenswrapper[4740]: I0130 17:17:09.930238 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ldfv8" Jan 30 17:17:09 crc kubenswrapper[4740]: I0130 17:17:09.978565 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-kube-api-access-tdtnx" (OuterVolumeSpecName: "kube-api-access-tdtnx") pod "b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" (UID: "b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a"). InnerVolumeSpecName "kube-api-access-tdtnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.027789 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.027890 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdtnx\" (UniqueName: \"kubernetes.io/projected/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-kube-api-access-tdtnx\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.034876 4740 scope.go:117] "RemoveContainer" containerID="ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.104631 4740 scope.go:117] "RemoveContainer" containerID="0ccdf9816c44dd306bcd1156650377be0be20c1e3ecff4fc218595f44d4dc933" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.153094 4740 scope.go:117] "RemoveContainer" containerID="3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.153521 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" (UID: "b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:10 crc kubenswrapper[4740]: E0130 17:17:10.153756 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab\": container with ID starting with 3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab not found: ID does not exist" containerID="3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.153807 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab"} err="failed to get container status \"3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab\": rpc error: code = NotFound desc = could not find container \"3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab\": container with ID starting with 3cade244da5a9f82ee914eb4d904f5ff9dfb96c3d01729535d663c99bbd862ab not found: ID does not exist" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.153837 4740 scope.go:117] "RemoveContainer" containerID="ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40" Jan 30 17:17:10 crc kubenswrapper[4740]: E0130 17:17:10.154635 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40\": container with ID starting with ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40 not found: ID does not exist" containerID="ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.154765 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40"} err="failed to get container status \"ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40\": rpc error: code = NotFound desc = could not find container \"ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40\": container with ID starting with ee5bee8de6705a0c5f4d3dfa962c52b4e7aaf396e4b540d76247a96c1b6c7e40 not found: ID does not exist" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.154802 4740 scope.go:117] "RemoveContainer" containerID="0ccdf9816c44dd306bcd1156650377be0be20c1e3ecff4fc218595f44d4dc933" Jan 30 17:17:10 crc kubenswrapper[4740]: E0130 17:17:10.155407 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ccdf9816c44dd306bcd1156650377be0be20c1e3ecff4fc218595f44d4dc933\": container with ID starting with 0ccdf9816c44dd306bcd1156650377be0be20c1e3ecff4fc218595f44d4dc933 not found: ID does not exist" containerID="0ccdf9816c44dd306bcd1156650377be0be20c1e3ecff4fc218595f44d4dc933" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.155436 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ccdf9816c44dd306bcd1156650377be0be20c1e3ecff4fc218595f44d4dc933"} err="failed to get container status \"0ccdf9816c44dd306bcd1156650377be0be20c1e3ecff4fc218595f44d4dc933\": rpc error: code = NotFound desc = could not find container \"0ccdf9816c44dd306bcd1156650377be0be20c1e3ecff4fc218595f44d4dc933\": container with ID starting with 
0ccdf9816c44dd306bcd1156650377be0be20c1e3ecff4fc218595f44d4dc933 not found: ID does not exist" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.233530 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.274717 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ldfv8"] Jan 30 17:17:10 crc kubenswrapper[4740]: I0130 17:17:10.289755 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ldfv8"] Jan 30 17:17:11 crc kubenswrapper[4740]: I0130 17:17:11.350263 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" path="/var/lib/kubelet/pods/b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a/volumes" Jan 30 17:17:11 crc kubenswrapper[4740]: I0130 17:17:11.935307 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-spzrl_2516854f-e7b5-4af2-a473-72ad1644043a/controller/0.log" Jan 30 17:17:11 crc kubenswrapper[4740]: I0130 17:17:11.947874 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-spzrl_2516854f-e7b5-4af2-a473-72ad1644043a/kube-rbac-proxy/0.log" Jan 30 17:17:12 crc kubenswrapper[4740]: I0130 17:17:12.165636 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-frr-files/0.log" Jan 30 17:17:12 crc kubenswrapper[4740]: I0130 17:17:12.414242 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-reloader/0.log" Jan 30 17:17:12 crc kubenswrapper[4740]: I0130 17:17:12.416106 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-metrics/0.log" Jan 30 17:17:12 crc kubenswrapper[4740]: I0130 17:17:12.439632 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-frr-files/0.log" Jan 30 17:17:12 crc kubenswrapper[4740]: I0130 17:17:12.443195 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-reloader/0.log" Jan 30 17:17:12 crc kubenswrapper[4740]: I0130 17:17:12.782322 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-frr-files/0.log" Jan 30 17:17:12 crc kubenswrapper[4740]: I0130 17:17:12.783292 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-metrics/0.log" Jan 30 17:17:12 crc kubenswrapper[4740]: I0130 17:17:12.786533 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-reloader/0.log" Jan 30 17:17:12 crc kubenswrapper[4740]: I0130 17:17:12.809531 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-metrics/0.log" Jan 30 17:17:13 crc kubenswrapper[4740]: I0130 17:17:13.006816 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-reloader/0.log" Jan 30 17:17:13 crc 
kubenswrapper[4740]: I0130 17:17:13.021098 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-frr-files/0.log" Jan 30 17:17:13 crc kubenswrapper[4740]: I0130 17:17:13.079661 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/cp-metrics/0.log" Jan 30 17:17:13 crc kubenswrapper[4740]: I0130 17:17:13.197135 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/controller/0.log" Jan 30 17:17:13 crc kubenswrapper[4740]: I0130 17:17:13.280854 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/frr-metrics/0.log" Jan 30 17:17:13 crc kubenswrapper[4740]: I0130 17:17:13.902794 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/kube-rbac-proxy-frr/0.log" Jan 30 17:17:13 crc kubenswrapper[4740]: I0130 17:17:13.951072 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/kube-rbac-proxy/0.log" Jan 30 17:17:13 crc kubenswrapper[4740]: I0130 17:17:13.989096 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/reloader/0.log" Jan 30 17:17:14 crc kubenswrapper[4740]: I0130 17:17:14.339753 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-kj46g_229e5897-0e63-4b65-8142-77d97ef63ca3/frr-k8s-webhook-server/0.log" Jan 30 17:17:14 crc kubenswrapper[4740]: I0130 17:17:14.688993 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f44447989-gnfds_7e49666f-5b34-430c-bfa4-c85208433cda/webhook-server/0.log" Jan 30 17:17:14 crc kubenswrapper[4740]: I0130 17:17:14.689693 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-679cd9954d-7f5xw_de72a0d2-8f4e-442e-99e0-8179782f810b/manager/0.log" Jan 30 17:17:14 crc kubenswrapper[4740]: I0130 17:17:14.818954 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bmfdq_2602e38b-af1a-4ece-8430-1c1ba3fe5044/frr/0.log" Jan 30 17:17:15 crc kubenswrapper[4740]: I0130 17:17:15.138334 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbsrp_d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9/kube-rbac-proxy/0.log" Jan 30 17:17:15 crc kubenswrapper[4740]: I0130 17:17:15.511169 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lbsrp_d2a72191-5c19-4a76-bc9d-6d5a2d07e8d9/speaker/0.log" Jan 30 17:17:24 crc kubenswrapper[4740]: I0130 17:17:24.455093 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:17:24 crc kubenswrapper[4740]: I0130 17:17:24.455733 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.546810 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cfp2s"] Jan 30 17:17:30 crc kubenswrapper[4740]: E0130 17:17:30.548121 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerName="extract-content" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.548143 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerName="extract-content" Jan 30 17:17:30 crc kubenswrapper[4740]: E0130 17:17:30.548208 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="extract-utilities" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.548219 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="extract-utilities" Jan 30 17:17:30 crc kubenswrapper[4740]: E0130 17:17:30.548232 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="extract-content" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.548241 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="extract-content" Jan 30 17:17:30 crc kubenswrapper[4740]: E0130 17:17:30.548273 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerName="extract-utilities" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.548280 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerName="extract-utilities" Jan 30 17:17:30 crc kubenswrapper[4740]: E0130 17:17:30.548297 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="registry-server" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.548303 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="registry-server" Jan 30 17:17:30 crc kubenswrapper[4740]: E0130 17:17:30.548316 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerName="registry-server" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.548324 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerName="registry-server" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.548603 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49a0cc8-ff5e-4fbe-904b-4c2a52115b8a" containerName="registry-server" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.548646 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="e341d963-5ce1-4f81-945e-cc43f69acae4" containerName="registry-server" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.550595 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.571401 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cfp2s"] Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.656923 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-utilities\") pod \"community-operators-cfp2s\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") " pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.657047 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kfl9\" (UniqueName: \"kubernetes.io/projected/2655bd57-bfb0-4633-9f39-a540ba48c452-kube-api-access-8kfl9\") pod \"community-operators-cfp2s\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") " pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.657141 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-catalog-content\") pod \"community-operators-cfp2s\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") " pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.760253 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-utilities\") pod \"community-operators-cfp2s\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") " pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.760375 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kfl9\" (UniqueName: \"kubernetes.io/projected/2655bd57-bfb0-4633-9f39-a540ba48c452-kube-api-access-8kfl9\") pod \"community-operators-cfp2s\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") " pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.760449 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-catalog-content\") pod \"community-operators-cfp2s\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") " pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.761014 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-utilities\") pod \"community-operators-cfp2s\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") " pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.761140 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-catalog-content\") pod \"community-operators-cfp2s\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") " pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.798080 4740 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8kfl9\" (UniqueName: \"kubernetes.io/projected/2655bd57-bfb0-4633-9f39-a540ba48c452-kube-api-access-8kfl9\") pod \"community-operators-cfp2s\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") " pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:30 crc kubenswrapper[4740]: I0130 17:17:30.878531 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:31 crc kubenswrapper[4740]: I0130 17:17:31.576121 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cfp2s"] Jan 30 17:17:32 crc kubenswrapper[4740]: I0130 17:17:32.202612 4740 generic.go:334] "Generic (PLEG): container finished" podID="2655bd57-bfb0-4633-9f39-a540ba48c452" containerID="a48fae25bb0b526c053c1dd5599e7050316ee2799ef345691b8a0587549d7fd2" exitCode=0 Jan 30 17:17:32 crc kubenswrapper[4740]: I0130 17:17:32.202843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfp2s" event={"ID":"2655bd57-bfb0-4633-9f39-a540ba48c452","Type":"ContainerDied","Data":"a48fae25bb0b526c053c1dd5599e7050316ee2799ef345691b8a0587549d7fd2"} Jan 30 17:17:32 crc kubenswrapper[4740]: I0130 17:17:32.202947 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfp2s" event={"ID":"2655bd57-bfb0-4633-9f39-a540ba48c452","Type":"ContainerStarted","Data":"f2716468b09c9137c3c5f3017baf9cf64b2d8e7f134a9d47ad11eb1fa62a004a"} Jan 30 17:17:32 crc kubenswrapper[4740]: I0130 17:17:32.740279 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/util/0.log" Jan 30 17:17:33 crc kubenswrapper[4740]: I0130 17:17:33.055867 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/util/0.log" Jan 30 17:17:33 crc kubenswrapper[4740]: I0130 17:17:33.076766 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/pull/0.log" Jan 30 17:17:33 crc kubenswrapper[4740]: I0130 17:17:33.112467 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/pull/0.log" Jan 30 17:17:33 crc kubenswrapper[4740]: I0130 17:17:33.413159 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/extract/0.log" Jan 30 17:17:33 crc kubenswrapper[4740]: I0130 17:17:33.427021 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/util/0.log" Jan 30 17:17:33 crc kubenswrapper[4740]: I0130 17:17:33.502274 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9sm5h_e4ef49bf-103d-4989-a9aa-52c98c542c3d/pull/0.log" Jan 30 17:17:33 crc kubenswrapper[4740]: I0130 17:17:33.671472 4740 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/util/0.log" Jan 30 17:17:34 crc kubenswrapper[4740]: I0130 17:17:34.000250 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/pull/0.log" Jan 30 17:17:34 crc kubenswrapper[4740]: I0130 17:17:34.017538 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/util/0.log" Jan 30 17:17:34 crc kubenswrapper[4740]: I0130 17:17:34.046791 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/pull/0.log" Jan 30 17:17:34 crc kubenswrapper[4740]: I0130 17:17:34.231014 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfp2s" event={"ID":"2655bd57-bfb0-4633-9f39-a540ba48c452","Type":"ContainerStarted","Data":"01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45"} Jan 30 17:17:34 crc kubenswrapper[4740]: I0130 17:17:34.321710 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/extract/0.log" Jan 30 17:17:34 crc kubenswrapper[4740]: I0130 17:17:34.385193 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/pull/0.log" Jan 30 17:17:34 crc kubenswrapper[4740]: I0130 17:17:34.389180 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3e572a74f8b8ca2bcfe04329d4f26bd9689911be5d166a7403bd6ae773ll87f_1dd39293-3572-4079-8f91-9f6549e8304d/util/0.log" Jan 30 17:17:34 crc kubenswrapper[4740]: I0130 17:17:34.590683 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/util/0.log" Jan 30 17:17:34 crc kubenswrapper[4740]: I0130 17:17:34.943248 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/util/0.log" Jan 30 17:17:35 crc kubenswrapper[4740]: I0130 17:17:35.016990 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/pull/0.log" Jan 30 17:17:35 crc kubenswrapper[4740]: I0130 17:17:35.029217 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/pull/0.log" Jan 30 17:17:35 crc kubenswrapper[4740]: E0130 17:17:35.088960 4740 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2655bd57_bfb0_4633_9f39_a540ba48c452.slice/crio-01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:17:35 crc 
kubenswrapper[4740]: I0130 17:17:35.249907 4740 generic.go:334] "Generic (PLEG): container finished" podID="2655bd57-bfb0-4633-9f39-a540ba48c452" containerID="01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45" exitCode=0 Jan 30 17:17:35 crc kubenswrapper[4740]: I0130 17:17:35.250033 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfp2s" event={"ID":"2655bd57-bfb0-4633-9f39-a540ba48c452","Type":"ContainerDied","Data":"01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45"} Jan 30 17:17:35 crc kubenswrapper[4740]: I0130 17:17:35.350093 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/extract/0.log" Jan 30 17:17:35 crc kubenswrapper[4740]: I0130 17:17:35.359470 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/util/0.log" Jan 30 17:17:35 crc kubenswrapper[4740]: I0130 17:17:35.468453 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134tlg2_d14500ed-3452-479b-b86a-d000ba46cdc5/pull/0.log" Jan 30 17:17:35 crc kubenswrapper[4740]: I0130 17:17:35.684137 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/util/0.log" Jan 30 17:17:35 crc kubenswrapper[4740]: I0130 17:17:35.937462 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/pull/0.log" Jan 30 17:17:36 crc kubenswrapper[4740]: I0130 17:17:36.009446 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/pull/0.log" Jan 30 17:17:36 crc kubenswrapper[4740]: I0130 17:17:36.063417 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/util/0.log" Jan 30 17:17:36 crc kubenswrapper[4740]: I0130 17:17:36.265253 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfp2s" event={"ID":"2655bd57-bfb0-4633-9f39-a540ba48c452","Type":"ContainerStarted","Data":"c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91"} Jan 30 17:17:36 crc kubenswrapper[4740]: I0130 17:17:36.295687 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cfp2s" podStartSLOduration=2.838970694 podStartE2EDuration="6.295653463s" podCreationTimestamp="2026-01-30 17:17:30 +0000 UTC" firstStartedPulling="2026-01-30 17:17:32.204844439 +0000 UTC m=+4900.841907038" lastFinishedPulling="2026-01-30 17:17:35.661527208 +0000 UTC m=+4904.298589807" observedRunningTime="2026-01-30 17:17:36.287295143 +0000 UTC m=+4904.924357732" watchObservedRunningTime="2026-01-30 17:17:36.295653463 +0000 UTC m=+4904.932716082" Jan 30 17:17:36 crc kubenswrapper[4740]: I0130 17:17:36.367985 4740 log.go:25] "Finished parsing log file" 
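
The pod_startup_latency_tracker entry above encodes two derived durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Go sketch of that arithmetic, using the timestamps from the entry (the helper name is illustrative, not kubelet's actual code):

package main

import (
	"fmt"
	"time"
)

// sloDurations mirrors the arithmetic visible in the log entry:
// e2e = observedRunning - created; slo = e2e - (image pull window).
func sloDurations(created, firstPull, lastPull, observedRunning time.Time) (slo, e2e time.Duration) {
	e2e = observedRunning.Sub(created)
	slo = e2e - lastPull.Sub(firstPull)
	return slo, e2e
}

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-30T17:17:30Z")
	firstPull := parse("2026-01-30T17:17:32.204844439Z")
	lastPull := parse("2026-01-30T17:17:35.661527208Z")
	running := parse("2026-01-30T17:17:36.295653463Z") // watchObservedRunningTime
	slo, e2e := sloDurations(created, firstPull, lastPull, running)
	fmt.Println(slo, e2e) // 2.838970694s 6.295653463s, matching the entry
}
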
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/util/0.log" Jan 30 17:17:36 crc kubenswrapper[4740]: I0130 17:17:36.401269 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/pull/0.log" Jan 30 17:17:36 crc kubenswrapper[4740]: I0130 17:17:36.568036 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gzwgv_d36022b6-9743-4c56-bdcf-b10ce676d3ac/extract/0.log" Jan 30 17:17:36 crc kubenswrapper[4740]: I0130 17:17:36.737044 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-utilities/0.log" Jan 30 17:17:36 crc kubenswrapper[4740]: I0130 17:17:36.999783 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-content/0.log" Jan 30 17:17:37 crc kubenswrapper[4740]: I0130 17:17:37.023197 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-utilities/0.log" Jan 30 17:17:37 crc kubenswrapper[4740]: I0130 17:17:37.098434 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-content/0.log" Jan 30 17:17:37 crc kubenswrapper[4740]: I0130 17:17:37.249768 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-utilities/0.log" Jan 30 17:17:37 crc kubenswrapper[4740]: I0130 17:17:37.509260 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/extract-content/0.log" Jan 30 17:17:37 crc kubenswrapper[4740]: I0130 17:17:37.696234 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfp2s_2655bd57-bfb0-4633-9f39-a540ba48c452/extract-utilities/0.log" Jan 30 17:17:38 crc kubenswrapper[4740]: I0130 17:17:38.042871 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfp2s_2655bd57-bfb0-4633-9f39-a540ba48c452/extract-content/0.log" Jan 30 17:17:38 crc kubenswrapper[4740]: I0130 17:17:38.117082 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfp2s_2655bd57-bfb0-4633-9f39-a540ba48c452/extract-content/0.log" Jan 30 17:17:38 crc kubenswrapper[4740]: I0130 17:17:38.170451 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfp2s_2655bd57-bfb0-4633-9f39-a540ba48c452/extract-utilities/0.log" Jan 30 17:17:38 crc kubenswrapper[4740]: I0130 17:17:38.252122 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ccqbd_16cbd370-42ef-4109-8c03-15de9af9df16/registry-server/0.log" Jan 30 17:17:38 crc kubenswrapper[4740]: I0130 17:17:38.418406 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfp2s_2655bd57-bfb0-4633-9f39-a540ba48c452/extract-content/0.log" Jan 30 17:17:38 crc kubenswrapper[4740]: 
I0130 17:17:38.459663 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfp2s_2655bd57-bfb0-4633-9f39-a540ba48c452/extract-utilities/0.log" Jan 30 17:17:38 crc kubenswrapper[4740]: I0130 17:17:38.494097 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cfp2s_2655bd57-bfb0-4633-9f39-a540ba48c452/registry-server/0.log" Jan 30 17:17:38 crc kubenswrapper[4740]: I0130 17:17:38.520500 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-utilities/0.log" Jan 30 17:17:38 crc kubenswrapper[4740]: I0130 17:17:38.871182 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-utilities/0.log" Jan 30 17:17:38 crc kubenswrapper[4740]: I0130 17:17:38.905855 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-content/0.log" Jan 30 17:17:38 crc kubenswrapper[4740]: I0130 17:17:38.949236 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-content/0.log" Jan 30 17:17:39 crc kubenswrapper[4740]: I0130 17:17:39.000343 4740 scope.go:117] "RemoveContainer" containerID="89159ddfa27a6f2ae6fe7a7202130f61e55906d0a19d4707f905ef3d14e668b3" Jan 30 17:17:39 crc kubenswrapper[4740]: I0130 17:17:39.215183 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-utilities/0.log" Jan 30 17:17:39 crc kubenswrapper[4740]: I0130 17:17:39.246061 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/extract-content/0.log" Jan 30 17:17:39 crc kubenswrapper[4740]: I0130 17:17:39.342951 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gqx29_74ccccc6-dc66-4346-8d92-b38103ce5d69/marketplace-operator/0.log" Jan 30 17:17:39 crc kubenswrapper[4740]: I0130 17:17:39.900292 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-utilities/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.078013 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5jvn_91bd209c-56bd-4aa3-b454-c05ef1b75167/registry-server/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.146972 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-content/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.146998 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-utilities/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.191862 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-content/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.547116 4740 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-content/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.547161 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/extract-utilities/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.581372 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-utilities/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.728825 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wbq99_6ca42d55-0b57-49b3-a072-a4b2a91333e1/registry-server/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.770909 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-content/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.780508 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-utilities/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.828818 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-content/0.log" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.879075 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.879135 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:40 crc kubenswrapper[4740]: I0130 17:17:40.936469 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:41 crc kubenswrapper[4740]: I0130 17:17:41.146994 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-utilities/0.log" Jan 30 17:17:41 crc kubenswrapper[4740]: I0130 17:17:41.185641 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/extract-content/0.log" Jan 30 17:17:41 crc kubenswrapper[4740]: I0130 17:17:41.426457 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cfp2s" Jan 30 17:17:41 crc kubenswrapper[4740]: I0130 17:17:41.512118 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cfp2s"] Jan 30 17:17:41 crc kubenswrapper[4740]: I0130 17:17:41.776800 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wj2w7_fe140203-75a6-4e94-84ab-1645cc026308/registry-server/0.log" Jan 30 17:17:43 crc kubenswrapper[4740]: I0130 17:17:43.371908 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cfp2s" podUID="2655bd57-bfb0-4633-9f39-a540ba48c452" containerName="registry-server" containerID="cri-o://c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91" gracePeriod=2 Jan 30 17:17:44 
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.312741 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cfp2s"
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.386173 4740 generic.go:334] "Generic (PLEG): container finished" podID="2655bd57-bfb0-4633-9f39-a540ba48c452" containerID="c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91" exitCode=0
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.386299 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cfp2s"
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.386321 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfp2s" event={"ID":"2655bd57-bfb0-4633-9f39-a540ba48c452","Type":"ContainerDied","Data":"c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91"}
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.387845 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfp2s" event={"ID":"2655bd57-bfb0-4633-9f39-a540ba48c452","Type":"ContainerDied","Data":"f2716468b09c9137c3c5f3017baf9cf64b2d8e7f134a9d47ad11eb1fa62a004a"}
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.387879 4740 scope.go:117] "RemoveContainer" containerID="c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91"
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.423217 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-utilities\") pod \"2655bd57-bfb0-4633-9f39-a540ba48c452\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") "
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.423817 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-catalog-content\") pod \"2655bd57-bfb0-4633-9f39-a540ba48c452\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") "
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.424199 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kfl9\" (UniqueName: \"kubernetes.io/projected/2655bd57-bfb0-4633-9f39-a540ba48c452-kube-api-access-8kfl9\") pod \"2655bd57-bfb0-4633-9f39-a540ba48c452\" (UID: \"2655bd57-bfb0-4633-9f39-a540ba48c452\") "
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.425304 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-utilities" (OuterVolumeSpecName: "utilities") pod "2655bd57-bfb0-4633-9f39-a540ba48c452" (UID: "2655bd57-bfb0-4633-9f39-a540ba48c452"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.426168 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.424231 4740 scope.go:117] "RemoveContainer" containerID="01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45"
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.434789 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2655bd57-bfb0-4633-9f39-a540ba48c452-kube-api-access-8kfl9" (OuterVolumeSpecName: "kube-api-access-8kfl9") pod "2655bd57-bfb0-4633-9f39-a540ba48c452" (UID: "2655bd57-bfb0-4633-9f39-a540ba48c452"). InnerVolumeSpecName "kube-api-access-8kfl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.499475 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2655bd57-bfb0-4633-9f39-a540ba48c452" (UID: "2655bd57-bfb0-4633-9f39-a540ba48c452"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.508571 4740 scope.go:117] "RemoveContainer" containerID="a48fae25bb0b526c053c1dd5599e7050316ee2799ef345691b8a0587549d7fd2"
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.528891 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kfl9\" (UniqueName: \"kubernetes.io/projected/2655bd57-bfb0-4633-9f39-a540ba48c452-kube-api-access-8kfl9\") on node \"crc\" DevicePath \"\""
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.528935 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2655bd57-bfb0-4633-9f39-a540ba48c452-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.562490 4740 scope.go:117] "RemoveContainer" containerID="c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91"
Jan 30 17:17:44 crc kubenswrapper[4740]: E0130 17:17:44.563083 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91\": container with ID starting with c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91 not found: ID does not exist" containerID="c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91"
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.563151 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91"} err="failed to get container status \"c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91\": rpc error: code = NotFound desc = could not find container \"c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91\": container with ID starting with c80404021035ecfb47d2358dcbe022f7e6c54831c172fc869a5b7c6f04092f91 not found: ID does not exist"
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.563202 4740 scope.go:117] "RemoveContainer" containerID="01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45"
Jan 30 17:17:44 crc kubenswrapper[4740]: E0130 17:17:44.563670 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45\": container with ID starting with 01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45 not found: ID does not exist" containerID="01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45"
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.563820 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45"} err="failed to get container status \"01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45\": rpc error: code = NotFound desc = could not find container \"01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45\": container with ID starting with 01a9e111af4b922dc2a425cd1965bd478b2962e1de4e65870b3b81555df54d45 not found: ID does not exist"
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.563935 4740 scope.go:117] "RemoveContainer" containerID="a48fae25bb0b526c053c1dd5599e7050316ee2799ef345691b8a0587549d7fd2"
Jan 30 17:17:44 crc kubenswrapper[4740]: E0130 17:17:44.564439 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a48fae25bb0b526c053c1dd5599e7050316ee2799ef345691b8a0587549d7fd2\": container with ID starting with a48fae25bb0b526c053c1dd5599e7050316ee2799ef345691b8a0587549d7fd2 not found: ID does not exist" containerID="a48fae25bb0b526c053c1dd5599e7050316ee2799ef345691b8a0587549d7fd2"
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.564494 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a48fae25bb0b526c053c1dd5599e7050316ee2799ef345691b8a0587549d7fd2"} err="failed to get container status \"a48fae25bb0b526c053c1dd5599e7050316ee2799ef345691b8a0587549d7fd2\": rpc error: code = NotFound desc = could not find container \"a48fae25bb0b526c053c1dd5599e7050316ee2799ef345691b8a0587549d7fd2\": container with ID starting with a48fae25bb0b526c053c1dd5599e7050316ee2799ef345691b8a0587549d7fd2 not found: ID does not exist"
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.764591 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cfp2s"]
Jan 30 17:17:44 crc kubenswrapper[4740]: I0130 17:17:44.774862 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cfp2s"]
Jan 30 17:17:45 crc kubenswrapper[4740]: I0130 17:17:45.349007 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2655bd57-bfb0-4633-9f39-a540ba48c452" path="/var/lib/kubelet/pods/2655bd57-bfb0-4633-9f39-a540ba48c452/volumes"
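
The three "ContainerStatus from runtime service failed ... NotFound" errors above are benign: the containers were already removed by an earlier pass, and the deletor logs the gRPC NotFound and moves on rather than failing the sync. A hedged Go sketch of that idempotent-delete pattern (the runtimeService interface is an illustrative stand-in for the CRI client, not kubelet's actual type):

package cleanup

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeService is an illustrative stand-in for the container runtime client.
type runtimeService interface {
	RemoveContainer(id string) error
}

// removeContainer treats a gRPC NotFound as success: a container that is
// already gone ("ID does not exist") leaves the retried cleanup safe.
func removeContainer(rt runtimeService, id string) error {
	err := rt.RemoveContainer(id)
	if status.Code(err) == codes.NotFound {
		return nil // already removed; nothing left to do
	}
	return err // nil on success, real errors otherwise
}
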
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:17:54 crc kubenswrapper[4740]: I0130 17:17:54.455080 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" Jan 30 17:17:54 crc kubenswrapper[4740]: I0130 17:17:54.456086 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c35794afa994d3ff4e920b0ac1c48c66fea8cc859a436bae72d0968bd9d3eac7"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:17:54 crc kubenswrapper[4740]: I0130 17:17:54.456144 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://c35794afa994d3ff4e920b0ac1c48c66fea8cc859a436bae72d0968bd9d3eac7" gracePeriod=600 Jan 30 17:17:55 crc kubenswrapper[4740]: I0130 17:17:55.510074 4740 generic.go:334] "Generic (PLEG): container finished" podID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerID="c35794afa994d3ff4e920b0ac1c48c66fea8cc859a436bae72d0968bd9d3eac7" exitCode=0 Jan 30 17:17:55 crc kubenswrapper[4740]: I0130 17:17:55.510153 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"c35794afa994d3ff4e920b0ac1c48c66fea8cc859a436bae72d0968bd9d3eac7"} Jan 30 17:17:55 crc kubenswrapper[4740]: I0130 17:17:55.510723 4740 scope.go:117] "RemoveContainer" containerID="1f504e0010da45e1667e64b832348da4f3b994e566c762c39189fcdd20be752f" Jan 30 17:17:56 crc kubenswrapper[4740]: I0130 17:17:56.523991 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerStarted","Data":"6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6"} Jan 30 17:17:58 crc kubenswrapper[4740]: I0130 17:17:58.509770 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fl52v_7fed297b-1b60-4fa1-81ad-f7aff661624d/prometheus-operator/0.log" Jan 30 17:17:58 crc kubenswrapper[4740]: I0130 17:17:58.552923 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d697f896c-pd76h_27f815e6-2917-46af-8a6d-4bcd66c35042/prometheus-operator-admission-webhook/0.log" Jan 30 17:17:58 crc kubenswrapper[4740]: I0130 17:17:58.563185 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d697f896c-p2h9b_e70968d1-7497-4724-9c80-cf5abdf288ea/prometheus-operator-admission-webhook/0.log" Jan 30 17:17:58 crc kubenswrapper[4740]: I0130 17:17:58.879822 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-pdgvg_6a0acde2-70b4-4622-a609-290cbc5f253f/operator/0.log" Jan 30 17:17:58 crc kubenswrapper[4740]: I0130 17:17:58.908124 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-r2zbm_522756c7-f451-4879-b2b3-2d19b80cb751/perses-operator/0.log" Jan 30 17:18:15 
Jan 30 17:18:15 crc kubenswrapper[4740]: I0130 17:18:15.563195 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b8b44847-7889n_05857b7d-f148-447a-96bb-d9846ef7402c/manager/0.log"
Jan 30 17:18:15 crc kubenswrapper[4740]: I0130 17:18:15.762370 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-b8b44847-7889n_05857b7d-f148-447a-96bb-d9846ef7402c/kube-rbac-proxy/0.log"
Jan 30 17:18:39 crc kubenswrapper[4740]: I0130 17:18:39.090919 4740 scope.go:117] "RemoveContainer" containerID="584360586e65f77c6b9a292de9b8b1868e27dcf4586c913c8061539610a08d70"
Jan 30 17:20:24 crc kubenswrapper[4740]: I0130 17:20:24.454700 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:20:24 crc kubenswrapper[4740]: I0130 17:20:24.455324 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:20:44 crc kubenswrapper[4740]: I0130 17:20:44.525483 4740 generic.go:334] "Generic (PLEG): container finished" podID="50b8cfe1-a001-4ebe-8756-6963c0c9c145" containerID="9394cbff9164e4bd600bb5d1c4308d3d4b2367c2b3dd82d6baee55c532164c99" exitCode=0
Jan 30 17:20:44 crc kubenswrapper[4740]: I0130 17:20:44.525584 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vcsws/must-gather-sffq6" event={"ID":"50b8cfe1-a001-4ebe-8756-6963c0c9c145","Type":"ContainerDied","Data":"9394cbff9164e4bd600bb5d1c4308d3d4b2367c2b3dd82d6baee55c532164c99"}
Jan 30 17:20:44 crc kubenswrapper[4740]: I0130 17:20:44.526784 4740 scope.go:117] "RemoveContainer" containerID="9394cbff9164e4bd600bb5d1c4308d3d4b2367c2b3dd82d6baee55c532164c99"
Jan 30 17:20:44 crc kubenswrapper[4740]: I0130 17:20:44.631325 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vcsws_must-gather-sffq6_50b8cfe1-a001-4ebe-8756-6963c0c9c145/gather/0.log"
Jan 30 17:20:54 crc kubenswrapper[4740]: I0130 17:20:54.455011 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:20:54 crc kubenswrapper[4740]: I0130 17:20:54.455566 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:20:58 crc kubenswrapper[4740]: I0130 17:20:58.388863 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vcsws/must-gather-sffq6"]
Jan 30 17:20:58 crc kubenswrapper[4740]: I0130 17:20:58.389904 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vcsws/must-gather-sffq6" podUID="50b8cfe1-a001-4ebe-8756-6963c0c9c145" containerName="copy" containerID="cri-o://c4a8dafe757f19aa3da49a4d4cc7df531224e2d03e8bf90b0a6c381886423bb2" gracePeriod=2
Jan 30 17:20:58 crc kubenswrapper[4740]: I0130 17:20:58.404847 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vcsws/must-gather-sffq6"]
Jan 30 17:20:58 crc kubenswrapper[4740]: I0130 17:20:58.712381 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vcsws_must-gather-sffq6_50b8cfe1-a001-4ebe-8756-6963c0c9c145/copy/0.log"
Jan 30 17:20:58 crc kubenswrapper[4740]: I0130 17:20:58.718233 4740 generic.go:334] "Generic (PLEG): container finished" podID="50b8cfe1-a001-4ebe-8756-6963c0c9c145" containerID="c4a8dafe757f19aa3da49a4d4cc7df531224e2d03e8bf90b0a6c381886423bb2" exitCode=143
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.277514 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vcsws_must-gather-sffq6_50b8cfe1-a001-4ebe-8756-6963c0c9c145/copy/0.log"
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.278166 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vcsws/must-gather-sffq6"
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.446504 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/50b8cfe1-a001-4ebe-8756-6963c0c9c145-must-gather-output\") pod \"50b8cfe1-a001-4ebe-8756-6963c0c9c145\" (UID: \"50b8cfe1-a001-4ebe-8756-6963c0c9c145\") "
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.447908 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtkdc\" (UniqueName: \"kubernetes.io/projected/50b8cfe1-a001-4ebe-8756-6963c0c9c145-kube-api-access-vtkdc\") pod \"50b8cfe1-a001-4ebe-8756-6963c0c9c145\" (UID: \"50b8cfe1-a001-4ebe-8756-6963c0c9c145\") "
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.455657 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b8cfe1-a001-4ebe-8756-6963c0c9c145-kube-api-access-vtkdc" (OuterVolumeSpecName: "kube-api-access-vtkdc") pod "50b8cfe1-a001-4ebe-8756-6963c0c9c145" (UID: "50b8cfe1-a001-4ebe-8756-6963c0c9c145"). InnerVolumeSpecName "kube-api-access-vtkdc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.550794 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtkdc\" (UniqueName: \"kubernetes.io/projected/50b8cfe1-a001-4ebe-8756-6963c0c9c145-kube-api-access-vtkdc\") on node \"crc\" DevicePath \"\""
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.669541 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b8cfe1-a001-4ebe-8756-6963c0c9c145-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "50b8cfe1-a001-4ebe-8756-6963c0c9c145" (UID: "50b8cfe1-a001-4ebe-8756-6963c0c9c145"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.732669 4740 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vcsws_must-gather-sffq6_50b8cfe1-a001-4ebe-8756-6963c0c9c145/copy/0.log"
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.733253 4740 scope.go:117] "RemoveContainer" containerID="c4a8dafe757f19aa3da49a4d4cc7df531224e2d03e8bf90b0a6c381886423bb2"
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.733308 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vcsws/must-gather-sffq6"
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.755813 4740 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/50b8cfe1-a001-4ebe-8756-6963c0c9c145-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 30 17:20:59 crc kubenswrapper[4740]: I0130 17:20:59.759978 4740 scope.go:117] "RemoveContainer" containerID="9394cbff9164e4bd600bb5d1c4308d3d4b2367c2b3dd82d6baee55c532164c99"
Jan 30 17:21:01 crc kubenswrapper[4740]: I0130 17:21:01.349243 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b8cfe1-a001-4ebe-8756-6963c0c9c145" path="/var/lib/kubelet/pods/50b8cfe1-a001-4ebe-8756-6963c0c9c145/volumes"
Jan 30 17:21:24 crc kubenswrapper[4740]: I0130 17:21:24.454917 4740 patch_prober.go:28] interesting pod/machine-config-daemon-7c7j6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:21:24 crc kubenswrapper[4740]: I0130 17:21:24.455430 4740 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:21:24 crc kubenswrapper[4740]: I0130 17:21:24.455487 4740 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6"
Jan 30 17:21:24 crc kubenswrapper[4740]: I0130 17:21:24.456585 4740 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6"} pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 17:21:24 crc kubenswrapper[4740]: I0130 17:21:24.456666 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" containerName="machine-config-daemon" containerID="cri-o://6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" gracePeriod=600
pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" event={"ID":"139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9","Type":"ContainerDied","Data":"6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6"} Jan 30 17:21:25 crc kubenswrapper[4740]: I0130 17:21:25.004672 4740 scope.go:117] "RemoveContainer" containerID="c35794afa994d3ff4e920b0ac1c48c66fea8cc859a436bae72d0968bd9d3eac7" Jan 30 17:21:25 crc kubenswrapper[4740]: E0130 17:21:25.393190 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:21:26 crc kubenswrapper[4740]: I0130 17:21:26.018335 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:21:26 crc kubenswrapper[4740]: E0130 17:21:26.018919 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:21:38 crc kubenswrapper[4740]: I0130 17:21:38.335938 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:21:38 crc kubenswrapper[4740]: E0130 17:21:38.336803 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:21:53 crc kubenswrapper[4740]: I0130 17:21:53.345022 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:21:53 crc kubenswrapper[4740]: E0130 17:21:53.346111 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:22:05 crc kubenswrapper[4740]: I0130 17:22:05.336073 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:22:05 crc kubenswrapper[4740]: E0130 17:22:05.336849 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" 
podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:22:19 crc kubenswrapper[4740]: I0130 17:22:19.336538 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:22:19 crc kubenswrapper[4740]: E0130 17:22:19.337929 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:22:33 crc kubenswrapper[4740]: I0130 17:22:33.343777 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:22:33 crc kubenswrapper[4740]: E0130 17:22:33.345130 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:22:45 crc kubenswrapper[4740]: I0130 17:22:45.335926 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:22:45 crc kubenswrapper[4740]: E0130 17:22:45.336828 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:22:59 crc kubenswrapper[4740]: I0130 17:22:59.336483 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:22:59 crc kubenswrapper[4740]: E0130 17:22:59.337604 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:23:10 crc kubenswrapper[4740]: I0130 17:23:10.336557 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:23:10 crc kubenswrapper[4740]: E0130 17:23:10.337861 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:23:23 crc kubenswrapper[4740]: I0130 17:23:23.352697 4740 scope.go:117] "RemoveContainer" 
containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:23:23 crc kubenswrapper[4740]: E0130 17:23:23.353621 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:23:34 crc kubenswrapper[4740]: I0130 17:23:34.335836 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:23:34 crc kubenswrapper[4740]: E0130 17:23:34.336676 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:23:45 crc kubenswrapper[4740]: I0130 17:23:45.336222 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:23:45 crc kubenswrapper[4740]: E0130 17:23:45.337238 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:23:57 crc kubenswrapper[4740]: I0130 17:23:57.336093 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:23:57 crc kubenswrapper[4740]: E0130 17:23:57.337040 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:24:09 crc kubenswrapper[4740]: I0130 17:24:09.336161 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:24:09 crc kubenswrapper[4740]: E0130 17:24:09.337230 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.445936 4740 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zcnw2"] Jan 30 17:24:20 crc kubenswrapper[4740]: E0130 17:24:20.451176 4740 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="50b8cfe1-a001-4ebe-8756-6963c0c9c145" containerName="copy" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.451481 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b8cfe1-a001-4ebe-8756-6963c0c9c145" containerName="copy" Jan 30 17:24:20 crc kubenswrapper[4740]: E0130 17:24:20.451577 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b8cfe1-a001-4ebe-8756-6963c0c9c145" containerName="gather" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.451641 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b8cfe1-a001-4ebe-8756-6963c0c9c145" containerName="gather" Jan 30 17:24:20 crc kubenswrapper[4740]: E0130 17:24:20.451707 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2655bd57-bfb0-4633-9f39-a540ba48c452" containerName="extract-utilities" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.451766 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2655bd57-bfb0-4633-9f39-a540ba48c452" containerName="extract-utilities" Jan 30 17:24:20 crc kubenswrapper[4740]: E0130 17:24:20.451835 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2655bd57-bfb0-4633-9f39-a540ba48c452" containerName="extract-content" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.451894 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2655bd57-bfb0-4633-9f39-a540ba48c452" containerName="extract-content" Jan 30 17:24:20 crc kubenswrapper[4740]: E0130 17:24:20.451973 4740 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2655bd57-bfb0-4633-9f39-a540ba48c452" containerName="registry-server" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.452924 4740 state_mem.go:107] "Deleted CPUSet assignment" podUID="2655bd57-bfb0-4633-9f39-a540ba48c452" containerName="registry-server" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.453468 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b8cfe1-a001-4ebe-8756-6963c0c9c145" containerName="copy" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.453596 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b8cfe1-a001-4ebe-8756-6963c0c9c145" containerName="gather" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.453696 4740 memory_manager.go:354] "RemoveStaleState removing state" podUID="2655bd57-bfb0-4633-9f39-a540ba48c452" containerName="registry-server" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.455976 4740 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.495467 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjv9q\" (UniqueName: \"kubernetes.io/projected/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-kube-api-access-xjv9q\") pod \"redhat-marketplace-zcnw2\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") " pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.495626 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-catalog-content\") pod \"redhat-marketplace-zcnw2\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") " pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.495698 4740 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-utilities\") pod \"redhat-marketplace-zcnw2\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") " pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.498452 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcnw2"] Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.597649 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-utilities\") pod \"redhat-marketplace-zcnw2\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") " pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.597835 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjv9q\" (UniqueName: \"kubernetes.io/projected/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-kube-api-access-xjv9q\") pod \"redhat-marketplace-zcnw2\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") " pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.597999 4740 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-catalog-content\") pod \"redhat-marketplace-zcnw2\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") " pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.598288 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-utilities\") pod \"redhat-marketplace-zcnw2\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") " pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.598621 4740 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-catalog-content\") pod \"redhat-marketplace-zcnw2\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") " pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.624249 4740 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xjv9q\" (UniqueName: \"kubernetes.io/projected/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-kube-api-access-xjv9q\") pod \"redhat-marketplace-zcnw2\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") " pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:20 crc kubenswrapper[4740]: I0130 17:24:20.805270 4740 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:21 crc kubenswrapper[4740]: I0130 17:24:21.332625 4740 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcnw2"] Jan 30 17:24:22 crc kubenswrapper[4740]: I0130 17:24:22.037907 4740 generic.go:334] "Generic (PLEG): container finished" podID="d4fc11bc-2cab-434f-8f6c-4245e0f900e1" containerID="6a28f06d9c1149d06b52710df2ff35f04c41b1547d536151b2f8ad165058deae" exitCode=0 Jan 30 17:24:22 crc kubenswrapper[4740]: I0130 17:24:22.038046 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcnw2" event={"ID":"d4fc11bc-2cab-434f-8f6c-4245e0f900e1","Type":"ContainerDied","Data":"6a28f06d9c1149d06b52710df2ff35f04c41b1547d536151b2f8ad165058deae"} Jan 30 17:24:22 crc kubenswrapper[4740]: I0130 17:24:22.038226 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcnw2" event={"ID":"d4fc11bc-2cab-434f-8f6c-4245e0f900e1","Type":"ContainerStarted","Data":"8ff59026c7b3515008cba3579cd04a735d4724dadd06b27948c8576f7c9eb9d9"} Jan 30 17:24:22 crc kubenswrapper[4740]: I0130 17:24:22.040491 4740 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:24:22 crc kubenswrapper[4740]: I0130 17:24:22.335809 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6" Jan 30 17:24:22 crc kubenswrapper[4740]: E0130 17:24:22.336150 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9" Jan 30 17:24:23 crc kubenswrapper[4740]: I0130 17:24:23.051804 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcnw2" event={"ID":"d4fc11bc-2cab-434f-8f6c-4245e0f900e1","Type":"ContainerStarted","Data":"58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39"} Jan 30 17:24:24 crc kubenswrapper[4740]: I0130 17:24:24.063843 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcnw2" event={"ID":"d4fc11bc-2cab-434f-8f6c-4245e0f900e1","Type":"ContainerDied","Data":"58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39"} Jan 30 17:24:24 crc kubenswrapper[4740]: I0130 17:24:24.063698 4740 generic.go:334] "Generic (PLEG): container finished" podID="d4fc11bc-2cab-434f-8f6c-4245e0f900e1" containerID="58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39" exitCode=0 Jan 30 17:24:25 crc kubenswrapper[4740]: I0130 17:24:25.082185 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcnw2" 
event={"ID":"d4fc11bc-2cab-434f-8f6c-4245e0f900e1","Type":"ContainerStarted","Data":"f2e49fc3e2cce14900db46c74b85993c30074bf2f0d98db536357c849cc03f92"} Jan 30 17:24:25 crc kubenswrapper[4740]: I0130 17:24:25.103985 4740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zcnw2" podStartSLOduration=2.418382556 podStartE2EDuration="5.103952815s" podCreationTimestamp="2026-01-30 17:24:20 +0000 UTC" firstStartedPulling="2026-01-30 17:24:22.040199711 +0000 UTC m=+5310.677262310" lastFinishedPulling="2026-01-30 17:24:24.72576997 +0000 UTC m=+5313.362832569" observedRunningTime="2026-01-30 17:24:25.101832501 +0000 UTC m=+5313.738895130" watchObservedRunningTime="2026-01-30 17:24:25.103952815 +0000 UTC m=+5313.741015404" Jan 30 17:24:30 crc kubenswrapper[4740]: I0130 17:24:30.806372 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:30 crc kubenswrapper[4740]: I0130 17:24:30.806986 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:30 crc kubenswrapper[4740]: I0130 17:24:30.859422 4740 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:31 crc kubenswrapper[4740]: I0130 17:24:31.196135 4740 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:31 crc kubenswrapper[4740]: I0130 17:24:31.263208 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcnw2"] Jan 30 17:24:33 crc kubenswrapper[4740]: I0130 17:24:33.164532 4740 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zcnw2" podUID="d4fc11bc-2cab-434f-8f6c-4245e0f900e1" containerName="registry-server" containerID="cri-o://f2e49fc3e2cce14900db46c74b85993c30074bf2f0d98db536357c849cc03f92" gracePeriod=2 Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.007699 4740 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.118490 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-catalog-content\") pod \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") "
Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.118585 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-utilities\") pod \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") "
Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.118795 4740 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjv9q\" (UniqueName: \"kubernetes.io/projected/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-kube-api-access-xjv9q\") pod \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\" (UID: \"d4fc11bc-2cab-434f-8f6c-4245e0f900e1\") "
Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.120229 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-utilities" (OuterVolumeSpecName: "utilities") pod "d4fc11bc-2cab-434f-8f6c-4245e0f900e1" (UID: "d4fc11bc-2cab-434f-8f6c-4245e0f900e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.132983 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-kube-api-access-xjv9q" (OuterVolumeSpecName: "kube-api-access-xjv9q") pod "d4fc11bc-2cab-434f-8f6c-4245e0f900e1" (UID: "d4fc11bc-2cab-434f-8f6c-4245e0f900e1"). InnerVolumeSpecName "kube-api-access-xjv9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.177732 4740 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4fc11bc-2cab-434f-8f6c-4245e0f900e1" (UID: "d4fc11bc-2cab-434f-8f6c-4245e0f900e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
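Teardown mirrors setup in reverse: once the registry-server is killed, each volume gets UnmountVolume followed by TearDown, and only then is it reported detached. For emptyDir and projected volumes there is no block device, which is why the later "Volume detached" entries carry DevicePath \"\". A simplified, hypothetical sketch of that ordering; the volume names and plugin paths come from the log, everything else is illustrative:

package main

import "fmt"

type mountedVolume struct{ outerName, plugin string }

func tearDown(v mountedVolume) error {
	// emptyDir and projected volumes only need their pod-local directory
	// cleaned; there is no device to unmap.
	fmt.Printf("UnmountVolume.TearDown succeeded for %q (plugin %s)\n", v.outerName, v.plugin)
	return nil
}

func main() {
	vols := []mountedVolume{
		{"catalog-content", "kubernetes.io/empty-dir"},
		{"utilities", "kubernetes.io/empty-dir"},
		{"kube-api-access-xjv9q", "kubernetes.io/projected"},
	}
	for _, v := range vols {
		if err := tearDown(v); err != nil {
			continue // left for the next reconciler pass
		}
		// Only after TearDown is the volume reported gone; DevicePath stays
		// empty for these plugin types.
		fmt.Printf("Volume detached for volume %q DevicePath %q\n", v.outerName, "")
	}
}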
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.180899 4740 generic.go:334] "Generic (PLEG): container finished" podID="d4fc11bc-2cab-434f-8f6c-4245e0f900e1" containerID="f2e49fc3e2cce14900db46c74b85993c30074bf2f0d98db536357c849cc03f92" exitCode=0 Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.180958 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcnw2" event={"ID":"d4fc11bc-2cab-434f-8f6c-4245e0f900e1","Type":"ContainerDied","Data":"f2e49fc3e2cce14900db46c74b85993c30074bf2f0d98db536357c849cc03f92"} Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.180991 4740 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcnw2" event={"ID":"d4fc11bc-2cab-434f-8f6c-4245e0f900e1","Type":"ContainerDied","Data":"8ff59026c7b3515008cba3579cd04a735d4724dadd06b27948c8576f7c9eb9d9"} Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.181012 4740 scope.go:117] "RemoveContainer" containerID="f2e49fc3e2cce14900db46c74b85993c30074bf2f0d98db536357c849cc03f92" Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.181199 4740 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcnw2" Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.217042 4740 scope.go:117] "RemoveContainer" containerID="58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39" Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.224145 4740 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.224188 4740 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.224201 4740 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjv9q\" (UniqueName: \"kubernetes.io/projected/d4fc11bc-2cab-434f-8f6c-4245e0f900e1-kube-api-access-xjv9q\") on node \"crc\" DevicePath \"\"" Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.230085 4740 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcnw2"] Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.240027 4740 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcnw2"] Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.260114 4740 scope.go:117] "RemoveContainer" containerID="6a28f06d9c1149d06b52710df2ff35f04c41b1547d536151b2f8ad165058deae" Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.312803 4740 scope.go:117] "RemoveContainer" containerID="f2e49fc3e2cce14900db46c74b85993c30074bf2f0d98db536357c849cc03f92" Jan 30 17:24:34 crc kubenswrapper[4740]: E0130 17:24:34.313844 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2e49fc3e2cce14900db46c74b85993c30074bf2f0d98db536357c849cc03f92\": container with ID starting with f2e49fc3e2cce14900db46c74b85993c30074bf2f0d98db536357c849cc03f92 not found: ID does not exist" containerID="f2e49fc3e2cce14900db46c74b85993c30074bf2f0d98db536357c849cc03f92" Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.314002 4740 
Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.314130 4740 scope.go:117] "RemoveContainer" containerID="58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39"
Jan 30 17:24:34 crc kubenswrapper[4740]: E0130 17:24:34.314870 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39\": container with ID starting with 58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39 not found: ID does not exist" containerID="58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39"
Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.315172 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39"} err="failed to get container status \"58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39\": rpc error: code = NotFound desc = could not find container \"58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39\": container with ID starting with 58051d15c64bc1b5bddd1a605ffde0a75afd693bce7ede3f1c0d1eedaa6f2f39 not found: ID does not exist"
Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.315267 4740 scope.go:117] "RemoveContainer" containerID="6a28f06d9c1149d06b52710df2ff35f04c41b1547d536151b2f8ad165058deae"
Jan 30 17:24:34 crc kubenswrapper[4740]: E0130 17:24:34.315797 4740 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a28f06d9c1149d06b52710df2ff35f04c41b1547d536151b2f8ad165058deae\": container with ID starting with 6a28f06d9c1149d06b52710df2ff35f04c41b1547d536151b2f8ad165058deae not found: ID does not exist" containerID="6a28f06d9c1149d06b52710df2ff35f04c41b1547d536151b2f8ad165058deae"
Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.315902 4740 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a28f06d9c1149d06b52710df2ff35f04c41b1547d536151b2f8ad165058deae"} err="failed to get container status \"6a28f06d9c1149d06b52710df2ff35f04c41b1547d536151b2f8ad165058deae\": rpc error: code = NotFound desc = could not find container \"6a28f06d9c1149d06b52710df2ff35f04c41b1547d536151b2f8ad165058deae\": container with ID starting with 6a28f06d9c1149d06b52710df2ff35f04c41b1547d536151b2f8ad165058deae not found: ID does not exist"
Jan 30 17:24:34 crc kubenswrapper[4740]: I0130 17:24:34.336882 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6"
Jan 30 17:24:34 crc kubenswrapper[4740]: E0130 17:24:34.337199 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
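The RemoveContainer failures at 17:24:34.31xxxx are a benign race: the containers already vanished along with their sandbox, so the CRI runtime answers NotFound, and the deletor logs the error and moves on, since "already gone" satisfies the desired state. A stdlib-only, illustrative sketch of that treat-NotFound-as-done pattern; errNotFound here is a stand-in for the gRPC status in the log, not a real CRI type:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("rpc error: code = NotFound desc = could not find container")

func containerStatus(id string) error {
	// The container was already removed together with its sandbox, so any
	// status lookup now fails, as in the log.go:32 entries above.
	return fmt.Errorf("failed to get container status %q: %w", id, errNotFound)
}

func deleteContainer(id string) {
	if err := containerStatus(id); err != nil {
		if errors.Is(err, errNotFound) {
			// Already gone means the desired end state is reached, so the
			// error is logged and otherwise ignored.
			fmt.Println("DeleteContainer returned error (ignored):", err)
			return
		}
		fmt.Println("DeleteContainer failed:", err)
	}
}

func main() {
	deleteContainer("f2e49fc3e2cce14900db46c74b85993c30074bf2f0d98db536357c849cc03f92")
}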
Jan 30 17:24:35 crc kubenswrapper[4740]: I0130 17:24:35.348496 4740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4fc11bc-2cab-434f-8f6c-4245e0f900e1" path="/var/lib/kubelet/pods/d4fc11bc-2cab-434f-8f6c-4245e0f900e1/volumes"
Jan 30 17:24:49 crc kubenswrapper[4740]: I0130 17:24:49.336629 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6"
Jan 30 17:24:49 crc kubenswrapper[4740]: E0130 17:24:49.337571 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 17:25:01 crc kubenswrapper[4740]: I0130 17:25:01.335747 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6"
Jan 30 17:25:01 crc kubenswrapper[4740]: E0130 17:25:01.336700 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 17:25:12 crc kubenswrapper[4740]: I0130 17:25:12.335338 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6"
Jan 30 17:25:12 crc kubenswrapper[4740]: E0130 17:25:12.336148 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
Jan 30 17:25:25 crc kubenswrapper[4740]: I0130 17:25:25.335898 4740 scope.go:117] "RemoveContainer" containerID="6ed98dde93bf79179ad07fca18f4855f0cf2af8075a909a5e2d08f65ba250ee6"
Jan 30 17:25:25 crc kubenswrapper[4740]: E0130 17:25:25.336953 4740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7c7j6_openshift-machine-config-operator(139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9)\"" pod="openshift-machine-config-operator/machine-config-daemon-7c7j6" podUID="139658c1-36a2-4af9-bdfd-2bc3f9e6dcc9"
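The machine-config-daemon entries repeat every 10 to 13 seconds because the sync loop keeps re-evaluating the pod while its container sits in restart backoff; the message pins the back-off ceiling at 5m0s. A sketch of that doubling schedule, assuming kubelet's usual 10s starting delay (the cap is taken from the log; the initial value is an assumption):

package main

import (
	"fmt"
	"time"
)

// Doubling restart backoff capped at 5m0s, as named in the
// CrashLoopBackOff message above.
func main() {
	const (
		initial  = 10 * time.Second // assumed starting delay
		maxDelay = 5 * time.Minute  // "back-off 5m0s" from the log
	)
	delay := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart %d: back-off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}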